Comtrade
Server Details
Comtrade MCP — UN Comtrade API for international bilateral trade data
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-comtrade
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging – every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control – enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials – Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics – see which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 9 of 9 tools scored.
Several tools have clear distinct purposes (trade data, country codes, memory operations), but ask_pipeworx and discover_tools overlap in intent (both help find or execute tasks), causing potential confusion. The memory tools (forget, recall, remember) are well-separated.
Most tools use consistent snake_case with descriptive names (comtrade_trade_data, comtrade_top_commodities). However, ask_pipeworx and the memory tools (forget, recall, remember) break the pattern, using simple verbs instead.
Nine tools is an appropriate count for a server that combines trade data retrieval, a natural language interface, and memory operations. The count feels slightly higher than necessary if ask_pipeworx subsumes some functions, but is still reasonable.
The trade tools cover key queries (data, top partners, top commodities) but lack operations like listing available classifications or time series. The memory tools are basic (CRUD without search). The natural language ask_pipeworx may fill gaps, but its scope is unclear.
Available Tools
9 tools

ask_pipeworx (Grade A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
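For illustration, here is what the raw MCP `tools/call` request for this tool could look like; the JSON-RPC envelope follows the MCP spec, the request id is arbitrary, and the question is one of the description's own examples (JSON does not permit comments, so all caveats live in this lead-in):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "What is the US trade deficit with China?"
    }
  }
}
```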
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses key behavioral traits: it picks the right tool, fills arguments, and returns results. It implies the tool may have broad capabilities but doesn't specify limitations or which data sources are available. No annotations are provided, so the description carries full burden, but it sufficiently sets expectations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded with the core purpose. It uses two clear sentences followed by examples. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one required parameter, no output schema), the description is complete. It explains what the tool does, how to use it, and provides examples. No further details are necessary.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds meaning beyond the input schema by explaining that the 'question' parameter should be in plain English and gives examples. Schema coverage is 100%, so the baseline is 3; the description adds extra value with usage examples.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Ask a question in plain English and get an answer from the best available data source.' It uses specific verbs ('ask', 'get') and describes the resource ('answer from best available data source'). It differentiates from sibling tools by highlighting its natural language interface and automatic tool selection, contrasting with the more specific sibling tools like comtrade_trade_data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'No need to browse tools or learn schemas — just describe what you need.' It implicitly suggests using other tools when you have a specific data source in mind, as this tool abstracts away tool selection. Examples illustrate appropriate use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
comtrade_country_codes (Grade A)
Look up country ISO numeric codes for trade queries (e.g., "840" = US, "156" = China). Returns code and country name pairs.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
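Since the tool takes no parameters, a sketch of the call is just the tool name with an empty `arguments` object (id arbitrary):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "comtrade_country_codes",
    "arguments": {}
  }
}
```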
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavior. It states 'No API call needed', indicating fast, local retrieval. However, it doesn't specify what happens if the list is empty or if it returns all countries or only common ones.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences, no fluff. Front-loaded with purpose, then efficiency note.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has no parameters and no output schema, but the description explains its static nature. For a simple reference list, this is adequate, though more detail on what 'common' means could help.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has no parameters and 100% coverage, so baseline is 3. The description adds no parameter info, but none is needed.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns a reference list of common country ISO numeric codes for UN Comtrade queries. It distinguishes itself from data query tools like comtrade_trade_data by being a reference/list tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description says 'Get a reference list' and 'No API call needed', implying it's a static lookup and not a live query. Sibling tools like comtrade_trade_data are for actual data, so this tool is for reference only.
comtrade_top_commodities (Grade A)
Find top commodities traded between two countries ranked by value. Returns product categories and trade volumes.
| Name | Required | Description | Default |
|---|---|---|---|
| flow | Yes | Trade flow: "M" for imports, "X" for exports | |
| year | Yes | Trade year (e.g., "2024") | |
| limit | No | Number of top commodities to return | 20 |
| partner_code | Yes | ISO numeric country code for the partner country | |
| reporter_code | Yes | ISO numeric country code for the reporting country | |
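As a hedged example, a request for the top 10 commodities the US imports from China in 2024, using the ISO numeric codes from the sibling schemas ("842" for US, "156" for China); treating `limit` as an integer is an assumption, since the server publishes no type details or output schema:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "comtrade_top_commodities",
    "arguments": {
      "reporter_code": "842",
      "partner_code": "156",
      "flow": "M",
      "year": "2024",
      "limit": 10
    }
  }
}
```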
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It correctly states the tool retrieves top commodities by trade value, implying a read-only operation. However, it doesn't disclose behavior like default limit (20), sorting direction, or whether results are aggregated. With zero annotations, the description should provide more behavioral context.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short sentences that are front-loaded with the core purpose. No wasted words. Every sentence earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (5 params, no output schema), the description is somewhat minimal. It explains the output conceptually (which product categories dominate) but doesn't specify the return format (e.g., list of HS codes with values). With no output schema, the description should provide more detail on what the agent will receive. However, the tool is relatively straightforward, so a 3 is acceptable.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters. The description adds meaning by explaining the tool's purpose (top commodities by trade value), which implies the limit parameter controls the number of results. However, it doesn't elaborate on parameter relationships or constraints beyond what the schema provides. Baseline 3 plus some added context.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool gets 'top traded commodities between two countries by trade value' and explains what it shows. It uses specific verbs and resources, and distinguishes itself from siblings like comtrade_trade_data (which likely provides detailed data) and comtrade_top_partners (which focuses on partners). However, it could more explicitly differentiate from comtrade_top_partners.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when analyzing bilateral trade composition, but provides no explicit guidance on when to use this vs. comtrade_trade_data or comtrade_top_partners. No exclusions or alternatives are mentioned. The context signals indicate sibling tools exist, but the description doesn't leverage this to guide selection.
comtrade_top_partners (Grade B)
Find a country's top trading partners ranked by trade volume. Returns partner countries and total trade values.
| Name | Required | Description | Default |
|---|---|---|---|
| flow | Yes | Trade flow: "M" for imports, "X" for exports | |
| year | Yes | Trade year (e.g., "2024") | |
| limit | No | Number of top partners to return | 20 |
| hs_code | No | Optional HS commodity code to filter by specific product | |
| reporter_code | Yes | ISO numeric country code (e.g., "842" for US) | |
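An illustrative request for the top US export partners in 2024, filtered to a single product; the `hs_code` value "8471" is borrowed from the comtrade_trade_data examples:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "comtrade_top_partners",
    "arguments": {
      "reporter_code": "842",
      "flow": "X",
      "year": "2024",
      "hs_code": "8471"
    }
  }
}
```

Omitting `hs_code` would rank partners across all commodities, per the schema description.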
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. States it returns top partners by trade value, which implies a sorted result. Does not disclose sorting order, pagination, or data freshness. Adequate but not thorough.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, concise and front-loaded with purpose. Could be slightly more structured, but no wasted words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 parameters, no output schema, and no annotations, the description is adequate but lacks details on output format, sorting, or edge cases. Does not explain default limit behavior.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. Description does not add additional parameter meaning beyond schema descriptions.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it gets top trading partners for a country by trade value. Implicitly distinguishes itself from sibling tools like comtrade_trade_data (broader) and comtrade_top_commodities (different focus), but never names or contrasts them explicitly.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implied usage: understanding main trade relationships. No explicit when-to-use or when-not-to-use guidance, nor comparison with siblings.
comtrade_trade_data (Grade A)
Get bilateral trade data between two countries (e.g., "840" for US, "156" for China). Returns trade values, quantities, and commodity details for imports and exports.
| Name | Required | Description | Default |
|---|---|---|---|
| flow | No | Trade flow: "M" for imports, "X" for exports; omit for both | "M,X" |
| year | Yes | Trade year (e.g., "2024") | |
| hs_code | No | HS commodity code at 2/4/6 digit level (e.g., "8471" for computers). Optional — omit for all commodities. | |
| partner_code | Yes | ISO numeric country code for the partner country (e.g., "156" for China, "0" for World) | |
| reporter_code | Yes | ISO numeric country code for the reporting country (e.g., "842" for US, "156" for China) | |
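Combining the schema's own examples, a sample request for US imports of HS 8471 goods (computers) from China in 2024 might look like this; omit `flow` to get both directions and `hs_code` for all commodities:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "comtrade_trade_data",
    "arguments": {
      "reporter_code": "842",
      "partner_code": "156",
      "year": "2024",
      "flow": "M",
      "hs_code": "8471"
    }
  }
}
```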
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It describes the tool as a data retrieval operation but does not mention any constraints like rate limits, data freshness, or whether it returns raw or aggregated data.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is two sentences, concise and front-loaded with the core purpose. Every sentence adds value, but could include more guidance on when to use it without being verbose.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description partially compensates by listing return fields (trade value, quantity, partner, commodity). However, it does not describe pagination, error handling, or data limits. Adequate but not thorough.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema covers 100% of parameters with descriptions. The tool description adds minimal extra meaning beyond the schema, but it mentions the default for flow ('M,X') which is not in the schema. Baseline 3, plus 1 for added default value info.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves bilateral trade data between two countries from the UN Comtrade database, specifying return fields like trade value, quantity, partner, and commodity. However, it does not differentiate from siblings like comtrade_top_commodities or comtrade_top_partners.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for trade data between two countries, but does not explicitly state when to use this tool vs. alternatives such as comtrade_top_commodities or comtrade_top_partners. No exclusions or prerequisites are mentioned.
discover_tools (Grade A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (max 50) | 20 |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
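A sketch of a discovery query, reusing one of the schema's example strings; the integer type for `limit` is assumed:

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "discover_tools",
    "arguments": {
      "query": "find trade data between countries",
      "limit": 5
    }
  }
}
```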
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It describes the tool as a search/catalog discovery tool, which implies it is read-only and non-destructive. However, it does not disclose any behavioral traits such as whether it uses vector search, caching, or rate limits. A score of 3 is appropriate because the description covers basic behavior but lacks depth.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise, consisting of three sentences with no wasted words. The key instructions are front-loaded: the first sentence states the purpose, the second indicates the return value, and the third gives a clear when-to-call directive.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 parameters, no output schema, no nested objects), the description is nearly complete. It explains what the tool does, what it returns, and when to use it. It even states 'Returns the most relevant tools with names and descriptions,' which covers the result format sufficiently.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds value by explaining that the query parameter should be a 'natural language description' and gives examples, which goes beyond the schema's generic description. The limit parameter is also mentioned with default and max values in the schema, but the description does not add extra semantics beyond that. Overall, the description enhances understanding of the query parameter, warranting a 4.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states a specific verb-resource combination ('Search the Pipeworx tool catalog') and clearly distinguishes the tool from siblings by indicating it is to be called 'FIRST' when the agent has 500+ tools, which differentiates it from other tools that perform specific data retrieval tasks.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly instructs the agent to call this tool first when many tools are available, and provides clear context ('Call this FIRST when you have 500+ tools available and need to find the right ones for your task'). This gives definitive when-to-use guidance.
forget (Grade A)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
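An illustrative deletion request; the key "target_ticker" is taken from the remember tool's examples and is hypothetical here:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "forget",
    "arguments": {
      "key": "target_ticker"
    }
  }
}
```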
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries burden. It discloses deletion action but omits details like reversibility, permissions needed, or side effects. Adequate but minimal.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single short sentence with no fluff. Could be slightly more informative, but very concise.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Simple tool with 1 param, no output schema, no nested objects. Description covers the basic purpose and parameter. Could add info about case sensitivity or persistence, but adequate for a straightforward deletion.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and schema already describes key as 'Memory key to delete'. Description adds no extra meaning beyond schema. Baseline 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the action (delete), resource (stored memory), and identifier (key). Distinguishes from siblings like 'remember' (store) and 'recall' (retrieve).
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies use when you need to delete a memory by key, but no explicit guidance on when to use alternatives or when not to use this tool. Siblings provide context, but description doesn't leverage it.
recall (Grade A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
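Because `key` is optional, the minimal call below would list all stored memories; passing a `key` (e.g., "target_ticker") would fetch a single entry instead:

```json
{
  "jsonrpc": "2.0",
  "id": 8,
  "method": "tools/call",
  "params": {
    "name": "recall",
    "arguments": {}
  }
}
```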
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. States it retrieves memory, but doesn't disclose if memory persists across sessions or any side effects. Adequate but not detailed.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with action and resource, no fluff.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool has one optional parameter and no output schema. Description covers the core usage well. Could mention return format or session persistence, but not critical given simplicity.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with the single 'key' parameter. Description adds context ('omit to list all keys') which goes beyond schema description, but is minimal.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the action ('retrieve') and the resource ('stored memory by key'), and distinguishes two modes (specific key vs. list all).
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says when to use ('to retrieve context you saved earlier') and hints at alternatives (omit key to list all). Does not mention when not to use it or contrast with sibling tools like 'forget' or 'remember'.
remember (Grade A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
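A hypothetical store request; the key comes from the schema examples, and the value "AAPL" is invented for illustration:

```json
{
  "jsonrpc": "2.0",
  "id": 9,
  "method": "tools/call",
  "params": {
    "name": "remember",
    "arguments": {
      "key": "target_ticker",
      "value": "AAPL"
    }
  }
}
```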
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses persistence behavior (authenticated vs anonymous) but does not mention size limits, overwrite behavior, or whether keys are case-sensitive. Adequate but not thorough.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each carrying distinct information: purpose, usage, and behavior. No filler. Could be slightly more structured but still concise.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and simple parameters, description covers purpose, usage, and persistence behavior adequately. Missing details like size limits or conflict resolution, but these are minor for a key-value store.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage, so baseline is 3. Description adds no additional parameter information beyond what schema already provides via examples and types.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it stores a key-value pair in session memory. Verb 'store' and resource 'key-value pair' are specific, and the description distinguishes it from siblings 'forget' and 'recall' by focusing on saving data.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description says 'use this to save intermediate findings, user preferences, or context across tool calls', providing clear use cases. It does not explicitly mention when not to use it, but the positive guidance is strong.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet.