crypto
Server Details
Crypto MCP — cryptocurrency prices and currency conversion
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-crypto
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging: Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control: Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials: Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics: See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.8/5 across all 8 tools scored; lowest score 2.9/5.
The tools have some clear distinctions, such as get_crypto_market vs. get_crypto_price, but there is significant overlap and ambiguity. For example, ask_pipeworx is a general-purpose tool that could potentially handle queries covered by other tools like get_exchange_rate, creating confusion about when to use which. The memory tools (remember, recall, forget) are distinct from crypto tools, but the overall set mixes unrelated domains, leading to potential misselection.
Naming conventions are inconsistent across the tool set. Some tools use verb_noun patterns (e.g., get_crypto_price, get_exchange_rate), while others use single verbs or phrases (e.g., ask_pipeworx, discover_tools, forget, recall, remember). There is no uniform style, with a mix of descriptive names and vague terms, making it harder for agents to predict tool purposes from their names alone.
With 8 tools, the count is reasonable, but it feels borderline due to the mixed scope. The server is named 'crypto', yet only 3 tools directly relate to cryptocurrency (get_crypto_market, get_crypto_price, get_exchange_rate), while others cover general utilities (ask_pipeworx, discover_tools) and memory management. This mismatch makes the tool count seem either too broad or insufficiently focused on the crypto domain.
For a server named 'crypto', the tool surface is significantly incomplete. It lacks essential operations such as historical price data, trading information, wallet management, or blockchain queries. The inclusion of unrelated tools like ask_pipeworx and memory functions does not fill these gaps. While the existing crypto tools cover basic price and market cap queries, they do not provide comprehensive coverage of cryptocurrency-related tasks, leading to potential agent failures in more complex scenarios.
Available Tools
8 tools

ask_pipeworx (Grade: A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
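As a sketch, an MCP client invokes this tool with a standard JSON-RPC `tools/call` request; the question below is taken from the tool's own examples, but any natural-language request would follow the same shape:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "What is the US trade deficit with China?"
    }
  }
}
```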
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: it's a query tool that interprets natural language, selects and executes appropriate data sources, and returns results. However, it lacks details on limitations (e.g., data scope, accuracy, rate limits) or error handling, which would be needed for a perfect score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by supporting details and examples. Every sentence adds value: the first explains the tool's function, the second describes its automation benefits, and the third provides concrete examples. No wasted words or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (natural language processing and tool selection) and lack of annotations or output schema, the description does well by explaining the high-level behavior and providing examples. However, it doesn't cover potential limitations or the format of returned results, leaving some gaps in contextual understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the single parameter ('question') as 'Your question or request in natural language.' The description adds minimal value beyond this, only reinforcing the natural language aspect without providing additional syntax or format details. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Ask a question in plain English and get an answer from the best available data source.' It specifies the verb ('ask'), resource ('answer'), and mechanism ('Pipeworx picks the right tool, fills the arguments'). It distinguishes itself from siblings by emphasizing natural language interaction versus direct tool invocation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'No need to browse tools or learn schemas — just describe what you need.' It implies the alternative (invoke tools directly when you already know the right one) and includes concrete examples ('What is the US trade deficit with China?', etc.) to illustrate appropriate use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade: A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (max 50) | 20 |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
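For illustration, a `tools/call` request using one of the documented example queries, with the optional `limit` narrowed from its default of 20:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "discover_tools",
    "arguments": {
      "query": "find trade data between countries",
      "limit": 5
    }
  }
}
```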
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: it's a search operation (implying read-only, non-destructive), returns a list of tools with names and descriptions, and emphasizes its role as a discovery tool. However, it lacks details on error handling, rate limits, or authentication needs, which are common gaps for tools without annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded and efficiently structured in two sentences: the first states the purpose and output, the second provides critical usage guidelines. Every sentence earns its place with no wasted words, making it highly concise and well-organized for quick comprehension.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (search function with 2 parameters) and lack of annotations or output schema, the description is largely complete. It covers purpose, usage context, and behavioral intent adequately. However, it does not explain the return format (e.g., structure of the tool list) or potential limitations, leaving minor gaps in full context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters (query and limit) thoroughly. The description adds no additional parameter semantics beyond what the schema provides, such as examples or edge cases. This meets the baseline of 3, as the schema handles the heavy lifting without description enhancement.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('search', 'returns') and resources ('Pipeworx tool catalog', 'most relevant tools with names and descriptions'). It distinguishes itself from sibling tools like get_crypto_market by focusing on tool discovery rather than data retrieval, making the purpose explicit and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' It names a specific condition (500+ tools available) and, by implication, when the tool can be skipped, offering strong usage direction even though it lists no explicit exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (Grade: B)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
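A minimal sketch of a delete request; the key is a hypothetical value that would have been stored earlier with remember:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "forget",
    "arguments": {
      "key": "target_ticker"
    }
  }
}
```

Note the description does not say what is returned if the key does not exist, so clients should not assume a particular error shape.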
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool deletes a memory, implying a destructive mutation, but lacks details on permissions needed, whether deletion is permanent or reversible, error handling (e.g., if key doesn't exist), or side effects. This is inadequate for a mutation tool with zero annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero waste. It is front-loaded with the core action ('Delete') and resource, making it immediately understandable without unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's destructive nature (implied by 'Delete'), lack of annotations, and no output schema, the description is incomplete. It doesn't address critical context like what 'stored memory' entails, confirmation of deletion, return values, or error scenarios, which are essential for safe and effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with the parameter 'key' fully documented in the schema. The description adds no additional meaning beyond what the schema provides (e.g., format examples or constraints), so it meets the baseline of 3 where the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Delete') and resource ('a stored memory by key'), distinguishing it from sibling tools like 'recall' (likely for retrieval) and 'remember' (likely for storage). It directly communicates the tool's function without ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. While it implies deletion of stored memories, it doesn't specify prerequisites (e.g., existing memory with that key), exclusions, or comparisons to siblings like 'discover_tools' or 'recall', leaving usage context unclear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_crypto_market (Grade: B)
Get top cryptocurrencies ranked by market cap. Returns name, symbol, USD price, market cap, and 24h % change for each.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of coins (1-100) | 10 |
| vs_currency | No | Quote currency | usd |
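A sketch of a request for the top five coins; `vs_currency` is omitted here, which per the schema falls back to the usd default:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "get_crypto_market",
    "arguments": {
      "limit": 5
    }
  }
}
```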
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool retrieves data ('Get'), implying a read-only operation, but doesn't address potential rate limits, authentication needs, error conditions, or the format/scope of returned data (e.g., pagination, freshness). This is inadequate for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. Every part of the sentence contributes meaning, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is minimally adequate. It covers the basic purpose but lacks details on usage guidelines, behavioral traits, and output format. Without annotations or an output schema, more context would be helpful, but it's not completely inadequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds no parameter-specific information beyond what's in the input schema, which has 100% coverage. It mentions 'top cryptocurrencies ranked by market cap,' which aligns with the 'limit' parameter's role, but doesn't explain how parameters interact or provide additional context. With high schema coverage, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get top cryptocurrencies ranked by market cap.' It specifies the verb ('Get'), the resource ('top cryptocurrencies'), and the data returned (name, symbol, USD price, market cap, and 24h % change). However, it doesn't explicitly differentiate itself from sibling tools like get_crypto_price or get_exchange_rate, which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like get_crypto_price or get_exchange_rate. It doesn't mention any prerequisites, exclusions, or specific contexts for usage, leaving the agent to infer based on tool names alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_crypto_price (Grade: A)
Get current price, market cap, and 24h change for a cryptocurrency. Specify coin ID (e.g., "bitcoin", "ethereum", "solana"). Returns USD price, market cap, and % change.
| Name | Required | Description | Default |
|---|---|---|---|
| coin_id | Yes | CoinGecko coin ID (e.g., bitcoin, ethereum, solana) | |
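A sketch using one of the documented CoinGecko IDs; any valid ID from CoinGecko's catalog should follow the same shape:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "get_crypto_price",
    "arguments": {
      "coin_id": "ethereum"
    }
  }
}
```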
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions data retrieval but lacks details on behavioral traits such as rate limits, error handling, authentication needs, or response format. This is a significant gap for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, consisting of two concise sentences that directly state the tool's purpose and usage without any wasted words or unnecessary details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 parameter, no output schema, no annotations), the description is somewhat complete but lacks behavioral context. It covers the basic purpose and parameter usage but does not address return values or operational constraints, leaving gaps in understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the 'coin_id' parameter. The description adds minimal value by providing examples (e.g., 'bitcoin', 'ethereum', 'solana'), but does not explain semantics beyond what the schema provides, aligning with the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get') and resources ('current price, market cap, and 24h change for a cryptocurrency'), distinguishing it from sibling tools like 'get_crypto_market' and 'get_exchange_rate' by focusing on individual coin metrics rather than broader market or exchange data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by specifying CoinGecko IDs, but it does not explicitly state when to use this tool versus alternatives like 'get_crypto_market' or 'get_exchange_rate', nor does it provide exclusions or prerequisites for usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_exchange_rate (Grade: C)
Convert between fiat currencies (e.g., USD to EUR). Returns conversion rate and timestamp.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | Target currency code (e.g., EUR, JPY, GBP) | |
| from | Yes | Source currency code (e.g., USD, EUR, GBP) | |
| amount | No | Amount to convert | 1 |
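A hypothetical conversion request; the amount of 250 is illustrative, and omitting `amount` would default to 1 per the schema:

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "get_exchange_rate",
    "arguments": {
      "from": "USD",
      "to": "EUR",
      "amount": 250
    }
  }
}
```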
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but offers minimal information. It states what the tool does but doesn't cover aspects like rate limits, error handling, data sources, or response format. For a tool with no annotations, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence with zero wasted words. It's appropriately sized and front-loaded, efficiently conveying the core purpose without unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description is incomplete. It doesn't address behavioral traits, return values, or error conditions, which are crucial for an agent to use the tool effectively. The description alone is insufficient for full contextual understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds no parameter-specific information beyond what's already in the input schema, which has 100% coverage. It implies parameters for source and target currencies but doesn't explain syntax, constraints, or usage details. With high schema coverage, the baseline score of 3 is appropriate as the schema handles most documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Convert') and resource (fiat currencies, returning a conversion rate and timestamp), making it immediately understandable. However, it doesn't explicitly differentiate itself from sibling tools like 'get_crypto_market' or 'get_crypto_price', which handle cryptocurrency data rather than fiat currencies.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools, prerequisites, or specific contexts where this tool is preferred, leaving the agent to infer usage based on the purpose alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (Grade: A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
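The schema implies two usage modes. The sketch below omits `key` entirely to list all stored keys; passing a key instead would fetch that single memory:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "recall",
    "arguments": {}
  }
}
```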
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool retrieves or lists memories stored earlier, which implies read-only behavior, but doesn't specify details like error handling (e.g., what happens if a key doesn't exist), session persistence mechanisms, or performance characteristics. It adds basic context but lacks depth on behavioral traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first sentence states the core functionality, and the second provides usage guidance. Every sentence earns its place with no redundant or vague language, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (one optional parameter) and lack of annotations or output schema, the description is mostly complete. It covers purpose, usage, and parameter semantics adequately. However, it could improve by mentioning return values or error cases, which would help compensate for the missing output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the 'key' parameter. The description adds value by explaining the semantics: omitting the key lists all memories, while providing it retrieves a specific memory. This clarifies the parameter's role beyond the schema's technical description, though it doesn't detail format or constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('retrieve', 'list') and resources ('previously stored memory', 'all stored memories'). It distinguishes from sibling tools by specifying it's for retrieving context saved earlier, unlike tools like 'remember' (likely for saving) or 'forget' (likely for deleting).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool vs. alternatives: 'Use this to retrieve context you saved earlier in the session or in previous sessions.' It also specifies when to omit the key parameter ('omit key' to list all keys), offering clear operational context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (Grade: A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
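A sketch of a store request using one of the schema's example keys; the value is illustrative free text:

```json
{
  "jsonrpc": "2.0",
  "id": 8,
  "method": "tools/call",
  "params": {
    "name": "remember",
    "arguments": {
      "key": "user_preference",
      "value": "prefers prices quoted in EUR"
    }
  }
}
```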
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the tool stores data in session memory, specifies persistence differences for authenticated vs. anonymous users ('Authenticated users get persistent memory; anonymous sessions last 24 hours'), and implies it's a write operation. However, it does not cover potential limitations like storage size, rate limits, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first sentence states the core purpose, the second gives usage guidance, and the third clarifies persistence behavior. Every sentence adds value, and there is no wasted text, making it highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (a write operation with no annotations and no output schema), the description is mostly complete. It covers purpose, usage, and key behavioral traits like persistence. However, it lacks details on return values or error handling, which would be beneficial since there is no output schema. It compensates well but has minor gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with both parameters ('key' and 'value') well-documented in the schema. The description adds minimal value beyond the schema, mentioning general use cases ('intermediate findings, user preferences, or context') but no specific syntax or format details. This meets the baseline of 3 for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Store a key-value pair') and resource ('in your session memory'), distinguishing it from sibling tools like 'recall' (which likely retrieves) and 'forget' (which likely removes). It provides concrete examples of what can be stored ('intermediate findings, user preferences, or context across tool calls'), making the purpose explicit and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool ('to save intermediate findings, user preferences, or context across tool calls'), but does not explicitly state when not to use it or name alternatives. It implies usage for persistence across sessions but lacks explicit exclusions or comparisons to siblings like 'recall' or 'forget'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming the listing lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.