Linear
Server Details
Linear MCP — wraps the Linear GraphQL API (OAuth)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-linear
- GitHub Stars: 0
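Because the transport is Streamable HTTP, an MCP client opens the session with the standard JSON-RPC initialize handshake before any tool calls. A minimal sketch of that first request, assuming the connector follows the stock MCP protocol (the protocol version, client name, and version strings are illustrative placeholders):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": { "name": "example-client", "version": "1.0.0" }
  }
}
```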
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4/5 across 10 of 10 tools scored. Lowest: 3.2/5.
The first three tools (ask_pipeworx, discover_tools, forget) are about a separate 'Pipeworx' system, not Linear. Among the Linear tools, linear_search and linear_list_issues overlap significantly; both return issues based on text queries or filters, making it unclear when to use one over the other. Also, ask_pipeworx claims to 'pick the right tool' but is itself a tool, creating ambiguity.
The Linear-specific tools follow a consistent 'linear_verb_noun' pattern (linear_create_issue, linear_get_issue, etc.), but the Pipeworx tools use different styles: 'ask_pipeworx', 'discover_tools', 'forget', 'recall', 'remember'. This mix of naming conventions reduces overall consistency.
There are 10 tools total, but 5 are dedicated to Pipeworx (ask, discover, forget, recall, remember) and 5 to Linear. The Linear subset (5 tools) feels slightly thin for a project management tool, lacking update and delete operations. The Pipeworx tools add bulk without clear integration with Linear, making the count feel inflated.
For Linear, the tools cover create, read, list, and search, but lack update_issue, delete_issue, and operations for comments or workflow transitions. The Pipeworx tools provide memory and discovery but no actual data operations beyond asking questions. The overall surface has notable gaps for both domains.
Available Tools
10 tools

ask_pipeworx (A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
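For illustration, a call to this tool over MCP uses the standard tools/call envelope; the question below reuses one of the documented examples:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "What is the US trade deficit with China?"
    }
  }
}
```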
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses that the tool selects the best data source and fills arguments, which is useful behavioral context. No annotations are provided, so the description carries the full burden. It does not mention any side effects, authorization needs, or rate limits; the tool appears to be read-only and non-destructive, but that must be inferred.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (two sentences plus examples) and front-loaded with the core purpose. Every sentence adds value. The examples are helpful but add length; still appropriate for clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple input schema (one string parameter), no output schema, and no annotations, the description is largely complete. It explains the tool's behavior and provides examples. It could mention that results are returned as text, but that is implied.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with a single 'question' parameter described as 'Your question or request in natural language'. The description adds context by specifying 'in plain English' and providing examples, but the schema already conveys the essential meaning. Baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool answers natural language questions by selecting the appropriate data source and filling arguments. It specifies the verb ('Ask'), the resource ('Pipeworx'), and distinguishes it from sibling tools that are more specific (e.g., linear_* tools).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: 'No need to browse tools or learn schemas — just describe what you need.' It gives examples of appropriate questions. However, it does not explicitly state when not to use this tool or mention alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
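A sketch of a discovery call, again using the standard MCP tools/call envelope; the query reuses one of the schema's own examples and the limit is arbitrary:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "discover_tools",
    "arguments": {
      "query": "find trade data between countries",
      "limit": 10
    }
  }
}
```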
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Although no annotations are provided, the description clearly states the tool's behavior: it searches a catalog and returns tool names and descriptions. It does not reveal performance characteristics (e.g., search algorithm, indexing) but adequately describes the core function and typical usage scenario.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise at two sentences, front-loads the action and resource, and every sentence adds value. The first sentence states what it does, and the second sentence explains when to use it. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (simple search, 2 params, no output schema), the description is complete: it explains what it does, when to use it, and what it returns. The schema covers parameter details, so no further elaboration is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with descriptions for both parameters. The description adds value by explaining the return type ('names and descriptions') and giving example queries, which helps the agent formulate effective queries. However, it does not add significant new meaning beyond the schema's parameter descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb+resource ('Search the Pipeworx tool catalog') and clearly distinguishes from siblings by stating 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This makes its purpose unique among sibling tools like ask_pipeworx or linear_search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This tells the agent when to use it and implies it's a preliminary step, distinguishing it from direct-action tools like linear_create_issue.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (A)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
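A minimal call sketch; the key is hypothetical, borrowed from the remember tool's schema examples:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "forget",
    "arguments": { "key": "target_ticker" }
  }
}
```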
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It confirms deletion (a destructive operation) but does not mention irreversibility, error handling, or authorization requirements. 'Delete' implies mutation, but the agent needs more context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence with 6 words, no redundancy, and front-loaded verb 'Delete'. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 required param, no output schema, no annotations), the description is adequate but could mention that deletion is irreversible or that the key must exist.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a single required parameter 'key' described as 'Memory key to delete'. The description adds minimal extra meaning, but the schema already provides full coverage, so a baseline of 3 is adjusted up slightly because the description reiterates the parameter's purpose.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a clear verb 'Delete' with a specific resource 'stored memory' and the qualifier 'by key', distinguishing it from sibling tools like 'remember' (store) and 'recall' (retrieve).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for deletion but provides no guidance on when to use it versus alternatives (e.g., 'recall' to read, 'remember' to store) or any prerequisites or consequences.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
linear_create_issue (A)
Create a new issue in Linear with title and optional description. Returns issue ID, key, title, and URL.
| Name | Required | Description | Default |
|---|---|---|---|
| title | Yes | Issue title | |
| teamId | Yes | Team ID to create the issue in | |
| priority | No | Priority level: 0 (none), 1 (urgent), 2 (high), 3 (medium), 4 (low) | |
| description | No | Issue description (markdown supported) | |
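A call sketch showing the numeric priority scale and a markdown description. The title, description, and team ID are hypothetical; a real teamId would come from linear_list_teams:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "linear_create_issue",
    "arguments": {
      "title": "Fix login timeout",
      "teamId": "<team ID from linear_list_teams>",
      "priority": 2,
      "description": "Steps to reproduce:\n1. Log in\n2. Wait 30 minutes"
    }
  }
}
```

Here priority 2 maps to high per the schema's 0 to 4 scale.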
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description explicitly states that it creates a new issue (write operation) and returns specific fields. Since annotations are empty, the description adequately conveys the behavioral trait that this is a creation tool, but does not mention any destructive behavior or side effects (which are not expected).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences that front-load the core purpose and key result. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool is a simple creation action with a well-defined schema and no output schema, the description covers the essential purpose and return value. It could mention more about optional parameters like priority or description, but the schema already provides that detail.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The description adds value by mentioning the return fields, which helps infer parameter importance. It does not add meaning beyond what the schema already provides for each parameter, though the return context compensates slightly.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (create), resource (issue), and tool (Linear). It also specifies what is returned (ID, key, title, URL), making it easy to distinguish from sibling tools like linear_get_issue or linear_list_issues.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for creating issues but provides no guidance on when to use this tool versus alternatives like linear_get_issue or linear_search. No explicit when-not or context about prerequisites is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
linear_get_issue (A)
Get full details of a Linear issue by ID (e.g., "ABC-123"). Returns title, description, state, priority, assignee, labels, comments, and URL.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Issue identifier (e.g., "ABC-123") | |
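A call sketch using the documented identifier format:

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "linear_get_issue",
    "arguments": { "id": "ABC-123" }
  }
}
```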
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It discloses that the tool returns 'full issue details' and lists the fields included (title, description, state, priority, assignee, labels, comments). This provides good transparency about the response content, though it does not mention whether the tool is read-only or any side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two sentences: the first states the core purpose, the second lists the returned fields. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool is a simple lookup with one parameter and no output schema, the description provides sufficient information for an agent to select and invoke the tool. It lists the key fields returned, which compensates for the lack of output schema. However, it could mention if the tool requires authentication or any rate limiting.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema coverage is 100% with a single parameter 'id' described as 'Issue identifier (e.g., "ABC-123")'. The description restates the same example, adding no new semantic meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and the resource 'a single Linear issue by its ID', and includes the specific ID format 'ABC-123'. It distinguishes from sibling tools like linear_list_issues (which lists multiple) and linear_create_issue (which creates).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when a specific issue ID is known, but does not explicitly state when not to use it (e.g., for searching by other criteria, use linear_search instead) or mention alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
linear_list_issues (B)
Browse issues in your Linear workspace with optional filters by state, priority, or assignee. Returns issue ID, title, state, priority, assignee, and URL.
| Name | Required | Description | Default |
|---|---|---|---|
| first | No | Number of issues to return (default 20, max 50) | |
| filter | No | Optional filter object as JSON string (e.g., {"state":{"name":{"eq":"In Progress"}}}). Passed directly to the Linear issues query filter. | |
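Note that filter is a JSON string rather than a nested object, so it must be escaped inside the arguments. A call sketch reusing the schema's own filter example:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "linear_list_issues",
    "arguments": {
      "first": 10,
      "filter": "{\"state\":{\"name\":{\"eq\":\"In Progress\"}}}"
    }
  }
}
```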
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It does not disclose behavioral traits such as pagination behavior beyond the 'first' parameter, rate limits, ordering, or whether results are truncated. The description only lists return fields, lacking operational details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence that front-loads the main action and lists return fields. No wasted words, but could be slightly more structured with separate lines for parameters and returns.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (2 params, no output schema), the description is mostly adequate but lacks behavioral transparency (pagination, defaults beyond 'first', ordering). The return fields are covered, but no details on empty results or error conditions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description does not add meaning beyond the schema: 'first' and 'filter' are already explained. The description mentions 'optional filtering' but provides no additional context about filter syntax beyond the schema's JSON example.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists issues from Linear with optional filtering, and enumerates the fields returned (ID, title, state, priority, assignee, URL). It distinguishes from siblings like linear_get_issue (single issue) and linear_create_issue (creation). However, it could more explicitly contrast with linear_search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for listing with optional filters, but does not provide explicit guidance on when to use this tool versus alternatives like linear_search or linear_get_issue. No when-not-to-use or exclusion criteria are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
linear_list_teams (A)
List all teams in your Linear workspace. Returns team ID, name, key, and description.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description states that the tool returns specific fields, which provides basic transparency. However, there are no annotations (e.g., readOnlyHint) to supplement this, and the description does not disclose any side effects, authorization needs, or limits. For a read-only list operation with no parameters, this is adequate but not exceptional.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence with no wasted words. It is front-loaded with the action and resource, then lists the return values. Perfectly concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool is simple with no parameters and no output schema. The description covers the purpose and return fields completely. It could mention that this is a paginated list or that it returns all teams, but for a simple list operation, it is nearly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters and schema coverage is 100%, so the description does not need to explain parameters. The description adds value by listing the returned fields, which helps the agent understand the output without needing an output schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'list' and resource 'teams in the Linear workspace', and specifies the returned fields (ID, name, key, description). It is distinct from sibling tools like linear_create_issue or linear_list_issues, which operate on different resources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies this tool is for listing all teams without filters, and the lack of parameters confirms it requires no additional input. It does not explicitly mention when not to use it or compare with alternatives, but the context is clear given the zero-parameter schema and sibling tools that handle other tasks.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
linear_search (A)
Search Linear issues by keyword or text. Returns matching issues with ID, title, state, priority, and URL.
| Name | Required | Description | Default |
|---|---|---|---|
| first | No | Number of results to return (default 20, max 50) | |
| query | Yes | Search query text | |
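A call sketch with a hypothetical query:

```json
{
  "jsonrpc": "2.0",
  "id": 8,
  "method": "tools/call",
  "params": {
    "name": "linear_search",
    "arguments": { "query": "login timeout", "first": 10 }
  }
}
```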
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It discloses that it returns specific fields (ID, title, state, priority, URL) but does not mention whether the operation is read-only, any rate limits, or potential side effects. As a search tool, it is implicitly read-only, but this is not explicitly stated.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences that efficiently convey the tool's purpose and output. No redundant or wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 params, no output schema, no nested objects), the description is adequate but minimal. It covers the purpose and return fields, but could mention that results are paginated or sorted.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The description does not add meaning beyond the schema: it lists returned fields but does not elaborate on the 'query' parameter's syntax or the 'first' parameter's behavior beyond what's in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (search), the resource (Linear issues), and the method (by text query). It also lists the returned fields (ID, title, state, priority, URL), distinguishing it from siblings like linear_list_issues which likely lists all issues without text search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for text-based search, but does not explicitly differentiate from other Linear tools like linear_list_issues or linear_get_issue. It lacks guidance on when not to use this tool or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
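A call sketch retrieving a single key (the key name is borrowed from the remember tool's schema examples); passing empty arguments instead would list all stored keys:

```json
{
  "jsonrpc": "2.0",
  "id": 9,
  "method": "tools/call",
  "params": {
    "name": "recall",
    "arguments": { "key": "subject_property" }
  }
}
```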
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that omitting the key lists all memories, which is a key behavioral trait. It does not detail persistence guarantees or data format, but it is adequate for a simple retrieval tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with action. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple retrieval tool with no required parameters and no output schema, the description is sufficient. It explains both retrieval modes and when to use them.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with one parameter 'key' described. The description adds context: omitting key lists all memories, which goes beyond the schema's description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a memory by key or lists all memories when key is omitted. This distinguishes it from 'remember' (write) and 'forget' (delete) tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says to use this for retrieving context saved earlier, implying when to use. It does not explicitly contrast with alternatives like 'ask_pipeworx' or 'discover_tools', but given sibling names, the purpose is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
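A call sketch; the key comes from the schema's examples and the value is hypothetical:

```json
{
  "jsonrpc": "2.0",
  "id": 10,
  "method": "tools/call",
  "params": {
    "name": "remember",
    "arguments": {
      "key": "subject_property",
      "value": "123 Main St, Springfield"
    }
  }
}
```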
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It discloses persistence differences: authenticated users get persistent memory, anonymous sessions last 24 hours. This adds useful behavioral context beyond the input schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each adding distinct value: what it does, when to use, and behavioral nuance. No wasted words, front-loaded with purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (2 simple params, no output schema), the description covers purpose, usage, and behavioral traits sufficiently. It does not explain return value, but that is acceptable as output schema is absent and the action is straightforward.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description does not add parameter-specific meaning beyond what the schema already provides. The schema examples for key and value are already clear.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool stores a key-value pair in session memory, with specific verb 'store' and resource 'key-value pair'. It distinguishes from siblings like 'recall' and 'forget' by defining the write operation for memory.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use it: to save intermediate findings, user preferences, or context across tool calls. It implies use over alternative tools like 'recall' or 'forget' but does not explicitly mention when not to use it or compare to siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.