LinkedIn Ads
Server Details
LinkedIn Ads MCP Pack

| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | pipeworx-io/mcp-linkedin_ads |
| GitHub Stars | 0 |
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging: Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control: Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials: Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics: See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.7/5 across 10 of 10 tools scored. Lowest: 2.9/5.
Tools like li_list_ad_accounts, li_list_campaigns, and li_list_creatives have clear boundaries, but ask_pipeworx and discover_tools overlap in purpose (both help locate tools and information), and the memory tools (remember, recall, forget) are unrelated to the LinkedIn Ads core, which invites confusion.
The LinkedIn Ads tools use a consistent 'li_' prefix with a descriptive verb_noun style (e.g., li_list_campaigns), but the Pipeworx and memory tools break this pattern with generic names: ask_pipeworx, discover_tools, remember, recall, and forget.
Ten tools is a reasonable count for a server that combines LinkedIn Ads functionality with memory and meta-tools. The count is slightly inflated by the unrelated utility tools, but remains manageable.
For LinkedIn Ads, the basic read operations are covered (list accounts, campaigns, and creatives; get a campaign; fetch analytics), but create, update, and delete operations for campaigns and creatives are missing, leaving significant gaps for ad management.
Available Tools
10 tools

ask_pipeworx (Grade: A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
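As a hedged illustration (this listing does not show the server's exact wire format), a standard MCP tools/call request for this tool could look like the following, using one of the description's own example questions:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "What is the US trade deficit with China?"
    }
  }
}
```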
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states that the tool picks the right tool and fills arguments, but does not disclose potential limitations, such as which data sources it can access, how it handles ambiguous questions, or whether it requires authentication. This is adequate but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, with only three sentences that clearly state the tool's purpose, how it works, and examples. Every sentence adds value, and the structure is front-loaded with the core functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one required parameter, no output schema, no annotations), the description is mostly complete. It covers what the tool does, how to use it, and provides examples. It lacks details on limitations and error cases, but these are less critical for such a straightforward tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (one parameter with description). The description adds value by explaining that the question is in natural language and provides examples, going beyond the schema's basic 'Your question or request in natural language'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to answer questions in plain English by automatically selecting the best data source. It uses a specific verb ('ask') and resource ('Pipeworx'), and distinguishes itself from sibling tools by positioning as an intelligent routing tool that eliminates the need to browse other tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: when you want to ask a question in natural language and have it routed to the best source. It implies not to use it when you need to control which tool or arguments are used, and gives examples that cover diverse use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade: A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
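Using the same assumed tools/call shape, a catalog search that caps results below the default of 20 might be:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "discover_tools",
    "arguments": {
      "query": "find trade data between countries",
      "limit": 10
    }
  }
}
```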
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description covers behavior well (search, return results, ordering by relevance). Since no annotations are provided, the description carries the full burden and does a good job, though it could mention rate limits or side effects (none are expected). There are no contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with action verb and purpose. Every sentence adds value: first states function, second gives usage directive. No fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately explains return format (tool names and descriptions). Simple tool with clear schema, and the description covers when to use it completely.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema already has 100% coverage with detailed descriptions for both parameters. Description does not add new information beyond what schema provides, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it searches a tool catalog by description, returns relevant tool names/descriptions, and is meant to be called first when many tools are available. Distinct from sibling tools which are action-oriented or memory-related.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Call this FIRST' and provides guidance on when to use it (when 500+ tools available and need to find right ones). Implicitly contrasts with sibling tools that are for specific actions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (Grade: A)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
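A minimal sketch of a delete call; the key is illustrative and presumed to have been stored earlier with remember:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "forget",
    "arguments": {
      "key": "target_ticker"
    }
  }
}
```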
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the behavioral disclosure. It states that the tool deletes a memory, which is destructive, but does not say whether deletion is permanent or reversible, or mention any other side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that clearly conveys the purpose with no unnecessary words. It is front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple tool (1 param, no output schema, no nested objects), the description covers the essential purpose. However, it lacks context about return value (e.g., success confirmation) and behavioral guarantees, which would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a single 'key' parameter described as 'Memory key to delete'. The description adds no extra meaning beyond the schema, so baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Delete') and the resource ('stored memory by key'). It distinguishes from sibling tools like 'remember' (store) and 'recall' (retrieve) by explicitly using 'Delete'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives (like recall for reading). The name 'forget' implies deletion, but no exclusion criteria or prerequisites are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
li_campaign_analytics (Grade: C)
Analyze campaign performance over a date range (e.g., "2024-01-01" to "2024-01-31"). Returns impressions, clicks, conversions, spend, and CTR by campaign.
| Name | Required | Description | Default |
|---|---|---|---|
| campaign_ids | Yes | Array of campaign IDs to query | |
| date_range_end | Yes | End date (YYYY-MM-DD) | |
| date_range_start | Yes | Start date (YYYY-MM-DD) | |
| time_granularity | No | Granularity: DAILY, MONTHLY, or ALL (default ALL) | |
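An illustrative daily-granularity query over the date range from the description; the campaign IDs are placeholders:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "li_campaign_analytics",
    "arguments": {
      "campaign_ids": ["501234567", "501234568"],
      "date_range_start": "2024-01-01",
      "date_range_end": "2024-01-31",
      "time_granularity": "DAILY"
    }
  }
}
```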
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It does not mention whether the operation is read-only, any rate limits, authentication needs, or the nature of returned data (e.g., aggregated metrics). This is insufficient for a tool querying multiple campaigns.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, concise and front-loaded with the key information. It earns its place, though it could be slightly more informative without becoming verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of querying multiple campaigns over a date range with optional granularity, the description lacks details about expected return values, pagination, or error handling. With no output schema, the description should compensate, but it does not.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds no additional meaning beyond the schema's parameter descriptions, which already cover the required parameters and optional time_granularity. No further semantic context is provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves analytics for LinkedIn ad campaigns over a date range, specifying the resource (ad campaigns) and action (get analytics). It differentiates from sibling tools like li_get_campaign, which likely returns campaign details rather than analytics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, such as li_get_campaign for individual campaign details or li_list_campaigns for listing campaigns. No exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
li_get_campaign (Grade: B)
Get full details for a specific LinkedIn campaign (e.g., campaign ID "501234567"). Returns name, budget, spend, status, targeting, and performance metrics.
| Name | Required | Description | Default |
|---|---|---|---|
| campaign_id | Yes | Campaign ID | |
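A sketch using the description's own example campaign ID:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "li_get_campaign",
    "arguments": {
      "campaign_id": "501234567"
    }
  }
}
```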
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries the full burden. It correctly implies a read-only operation (get details) but does not disclose any behavioral traits such as required permissions, rate limits, or what 'details' entails. A score of 3 is appropriate as the description is not misleading but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that clearly states the tool's purpose. No extraneous information is present, but it could be slightly more detailed without becoming verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has only one parameter and no output schema or annotations, the description is minimally complete. It covers the basic purpose but lacks behavioral details and context needed for an agent to use it effectively in a complex workflow.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description does not add any additional meaning beyond the schema's 'Campaign ID' label. It does not explain how to obtain the ID or any format constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('LinkedIn ad campaign') to clearly state the tool's purpose. However, it does not distinguish it from sibling tools like 'li_campaign_analytics' or 'li_list_campaigns', missing an opportunity to clarify scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. For instance, when to use 'li_get_campaign' vs 'li_campaign_analytics' for details vs analytics is not addressed. The description lacks context about prerequisites or use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
li_list_ad_accounts (Grade: B)
Check which LinkedIn ad accounts you can access. Returns account IDs, names, and status to identify which account to use for campaigns.
| Name | Required | Description | Default |
|---|---|---|---|
| count | No | Max results (default 10) | |
| start | No | Pagination offset (default 0) | |
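Both parameters are optional; this hypothetical call fetches the second page of ten results by offsetting start:

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "li_list_ad_accounts",
    "arguments": {
      "count": 10,
      "start": 10
    }
  }
}
```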
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It does not disclose any behavioral traits such as pagination behavior, rate limits, authentication requirements beyond 'authenticated user', or what happens if no accounts exist. For a list operation, these are important.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence that is perfectly concise and front-loaded. Every word is necessary and informative.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (list with pagination, no output schema, no annotations), the description is minimally adequate. It does not explain the return format or default behavior for count/start, but the schema covers the parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description does not add any parameter information beyond what the schema already provides. It could mention that count and start control pagination, but this is already in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (list), resource (LinkedIn ad accounts), and scope (accessible by the authenticated user). It is specific and distinguishes from sibling tools like li_list_campaigns which list a different resource.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. However, the description implies it is for viewing accessible accounts, which is a typical first step before using other tools. No exclusions or when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
li_list_campaigns (Grade: B)
Get all campaigns in a LinkedIn ad account (e.g., account ID "501234567"). Returns campaign IDs, names, budgets, status, and date ranges.
| Name | Required | Description | Default |
|---|---|---|---|
| count | No | Max results (default 10) | |
| start | No | Pagination offset (default 0) | |
| account_id | Yes | Sponsored account ID (numeric, e.g., "508127070") | |
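Omitting count and start falls back to their defaults (10 and 0); a minimal call with the schema's example account ID might be:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "li_list_campaigns",
    "arguments": {
      "account_id": "508127070"
    }
  }
}
```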
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It only states 'list', which implies a read-only operation, but doesn't disclose any behavioral traits like rate limits, data freshness, or whether the list is paginated (though schema hints at pagination). Adequate but minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that clearly communicates the purpose. No wasted words. It could be slightly more informative, but it is appropriately short.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (3 parameters, no output schema, no annotations), the description is minimally complete. It identifies the action and resource, but lacks guidance on usage and behavioral context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the description adds no extra meaning beyond the schema. The parameter descriptions in the schema are already clear (count, start, account_id). A baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists campaigns for a LinkedIn ad account, using the verb 'list' and the resource 'campaigns for a LinkedIn ad account'. This differentiates it from siblings like li_campaign_analytics and li_get_campaign.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides basic context for listing campaigns, but lacks explicit guidance on when to use this tool versus alternatives such as li_get_campaign for a single campaign or li_campaign_analytics for analytics. It does not mention exclusions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
li_list_creatives (Grade: A)
View all ads in a LinkedIn campaign (e.g., campaign ID "501234567"). Returns creative IDs, titles, content, status, and creation dates to compare variations.
| Name | Required | Description | Default |
|---|---|---|---|
| count | No | Max results (default 10) | |
| start | No | Pagination offset (default 0) | |
| campaign_id | Yes | Campaign ID | |
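A sketch that raises the page size above the default while keeping the description's example campaign ID:

```json
{
  "jsonrpc": "2.0",
  "id": 8,
  "method": "tools/call",
  "params": {
    "name": "li_list_creatives",
    "arguments": {
      "campaign_id": "501234567",
      "count": 25
    }
  }
}
```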
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It does not disclose any behavioral traits such as whether it's read-only, destructive, rate limits, or authentication requirements. It only states the basic purpose.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no wasted words. It front-loads the action and resource. Appropriate for a simple list operation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 3 parameters, no output schema, and no annotations, the description is too sparse. It doesn't explain the return format, pagination behavior, or any additional context needed to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds no extra meaning beyond the schema; it just names the action. It does not clarify parameter usage or constraints beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List' and resource 'creatives (ads)', and specifies the scope 'for a specific LinkedIn campaign'. It distinguishes itself from sibling tools like li_get_campaign (single campaign) and li_list_campaigns (campaigns, not creatives).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use: when you need to list creatives for a given campaign. It doesn't explicitly state when not to use it or mention alternatives, but given the sibling names, the distinction is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (Grade: A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
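Because key is optional, passing empty arguments should list all stored keys, while supplying a key retrieves that single memory. A minimal list-all sketch:

```json
{
  "jsonrpc": "2.0",
  "id": 9,
  "method": "tools/call",
  "params": {
    "name": "recall",
    "arguments": {}
  }
}
```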
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description must cover behavior. Describes the two modes and the purpose, but does not disclose side effects (e.g., read-only vs state changes), return format, or error handling. Adequate but not rich.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero wasted words. Front-loaded with the core action. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 optional param, no output schema, no nested objects), the description sufficiently covers retrieval semantics. Lacks only minor details like return format or session scope, but not critical for a basic retrieval tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (key description matches purpose). Description adds context about listing when omitted, which the schema already implies. No extra semantic value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states verb 'retrieve' and resource 'memory' with two distinct behaviors (by key or list all). Distinguishes from sibling tools like 'remember' and 'forget' by specifying retrieval vs storage/deletion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says to omit key to list all, and frames use case as retrieving context saved earlier. Lacks explicit when-not-to-use or comparison with siblings, but the context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (Grade: A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
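A sketch that stores a finding under one of the schema's example keys; the value is illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 10,
  "method": "tools/call",
  "params": {
    "name": "remember",
    "arguments": {
      "key": "target_ticker",
      "value": "AAPL"
    }
  }
}
```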
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses important behavioral traits: persistence differences between authenticated users (persistent) and anonymous sessions (24-hour expiry). This adds value beyond the schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each with distinct purpose: action+resource, usage context, and behavioral note. Could be slightly more concise by removing redundant phrasing, but overall efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given simplicity (2 required string params, no output schema), the description is fully complete. It explains what is stored, why to use it, and behavioral differences. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description does not add additional parameter meaning beyond what the schema already provides (key and value descriptions are self-explanatory).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the verb 'Store' and resource 'key-value pair in session memory'. Distinguishes itself from sibling 'recall' (which retrieves) and 'forget' (which deletes) by specifying the action of saving data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit usage guidance: 'Use this to save intermediate findings, user preferences, or context across tool calls.' It gives concrete examples of when to use. However, it does not explicitly state when NOT to use it or mention alternatives like 'recall' for retrieval.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming this connector lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently

For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.