videogames
Server Details
Videogames MCP — wraps the FreeToGame Free-to-Play Games API (freetogame.com; free, no auth required)
| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | pipeworx-io/mcp-videogames |
| GitHub Stars | 0 |
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging – Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control – Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials – Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics – See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across all 8 tools scored. Lowest: 2.9/5.
Most tools have distinct purposes, but filter_games and list_games overlap: both list free-to-play games with filtering options, which could cause confusion. The memory tools (remember, recall, forget) are clearly distinct, and ask_pipeworx and discover_tools serve unique high-level functions.
Naming conventions are inconsistent across the tool set. Some tools use verb_noun patterns like filter_games, get_game, and list_games, while the memory tools use bare verbs (forget, recall, remember), and ask_pipeworx and discover_tools follow patterns of their own. This mix of styles lacks a predictable shape, making the set less coherent.
With 8 tools, the count is reasonable for a server focused on video games and general utilities. It covers core gaming operations and memory management without being overwhelming, though the inclusion of general tools like ask_pipeworx and discover_tools slightly broadens the scope beyond a pure video game server.
For the video game domain, the server provides good coverage for retrieving and filtering free-to-play games, but lacks update or delete operations for games, which limits CRUD completeness. The memory tools offer basic storage functionality, and ask_pipeworx and discover_tools add general-purpose capabilities, but there are notable gaps in game management beyond read operations.
Available Tools
8 tools

ask_pipeworx (Grade: A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
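As a sketch of how an agent would invoke this tool, here is a standard MCP `tools/call` JSON-RPC request; the question is taken from the description's own examples:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "What is the US trade deficit with China?"
    }
  }
}
```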
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by explaining key behaviors: Pipeworx 'picks the right tool, fills the arguments, and returns the result.' It also implies this is a read-only operation (asking questions/getting answers), though it doesn't explicitly state safety characteristics. It could benefit from mentioning rate limits or authentication needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfectly concise and well-structured: one sentence states the purpose, another explains the mechanism, a third provides usage guidance, and examples illustrate concrete applications. Every sentence earns its place with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with no annotations and no output schema, the description provides excellent context about how the tool works and when to use it. The only gap is the lack of information about return format or error handling, which would be helpful given that no output schema exists.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the single 'question' parameter. The description adds some context about natural language questions and provides examples, but doesn't add significant semantic meaning beyond what the schema provides. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Ask a question in plain English and get an answer from the best available data source.' It specifies the verb ('ask'), resource ('data source'), and distinguishes from siblings by emphasizing natural language input without needing to browse tools or learn schemas.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'No need to browse tools or learn schemas — just describe what you need.' It provides clear alternatives (implicitly suggesting other tools require browsing or schema knowledge) and includes concrete examples to illustrate appropriate usage scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade: A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
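A minimal sketch of a call, assuming standard MCP JSON-RPC framing; the query is one of the schema's own examples, and the limit of 10 is an illustrative value within the documented max of 50:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "discover_tools",
    "arguments": {
      "query": "find trade data between countries",
      "limit": 10
    }
  }
}
```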
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior: it searches by natural language query, returns relevant tools with names and descriptions, and notes the default and maximum result limits. However, it lacks details on error handling, rate limits, or authentication needs, which are minor gaps for a read-only search tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, with two sentences that efficiently convey purpose and usage guidelines without any wasted words. Each sentence earns its place by providing critical information for agent decision-making.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (search functionality with two parameters), no annotations, and no output schema, the description is mostly complete. It covers purpose, usage, and high-level behavior, but lacks details on output format (e.g., structure of returned tools) and error cases, which are minor omissions in this context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds no additional parameter semantics (such as extra examples or usage nuances) beyond what the schema provides, meeting the baseline score of 3 for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Search the Pipeworx tool catalog') and resources ('tool catalog'), and explicitly distinguishes it from sibling tools by emphasizing its role in discovery among '500+ tools available' rather than filtering or listing specific games like the siblings do.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This clearly states when to use it (for discovery in large catalogs) and implies when not to use it (for simpler tasks with fewer tools or when specific tools are already known), offering a strong alternative context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
filter_games (Grade: A)
Filter free-to-play games by tag (dot-separated combination of attributes). Returns matching games with title, short description, genre, platform, publisher, release date, and thumbnail.
| Name | Required | Description | Default |
|---|---|---|---|
| tag | Yes | Dot-separated tag filter, e.g. "3d.mmorpg.fantasy", "shooter.pvp", "browser.strategy" | |
| platform | No | Optional platform filter: "pc" or "browser" | |
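A hedged example of what a call might look like over MCP; the tag comes straight from the schema's examples and the platform value from its documented enum:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "filter_games",
    "arguments": {
      "tag": "3d.mmorpg.fantasy",
      "platform": "pc"
    }
  }
}
```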
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden but lacks behavioral details. It states the tool returns matching games with specific fields, but doesn't disclose pagination, rate limits, error handling, or whether it's a read-only operation (though implied by 'filter'). This leaves gaps for agent decision-making.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently conveys purpose, parameters, and output. It's front-loaded with the core action and includes essential details without redundancy, making it highly concise and effective.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description partially compensates by specifying return fields, but lacks details on behavioral traits (e.g., safety, performance) and doesn't fully explain the output structure. It's adequate for a simple filter tool but has clear gaps in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds no additional parameter semantics beyond what's in the schema (e.g., no extra context on tag format or platform usage), meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Filter free-to-play games by tag'), identifies the resource ('games'), and distinguishes from siblings by specifying it returns matching games with detailed attributes, unlike 'get_game' (likely single game) or 'list_games' (likely unfiltered list).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when filtering by tag is needed, but provides no explicit guidance on when to use this tool versus 'list_games' (e.g., for unfiltered listing) or 'get_game' (e.g., for single game details). It mentions 'free-to-play' scope but doesn't clarify if siblings have different scopes.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (Grade: C)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
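A sketch of a deletion call with standard MCP framing; the key name is hypothetical (borrowed from the examples in remember's schema) and assumes a memory stored earlier in the session:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "forget",
    "arguments": {
      "key": "user_preference"
    }
  }
}
```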
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. While 'Delete' implies a destructive mutation, it doesn't disclose whether deletion is permanent, requires specific permissions, affects related data, or provides confirmation feedback. For a destructive tool with zero annotation coverage, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero wasted words. It's front-loaded with the core action and resource, making it immediately scannable and appropriately sized for a simple tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive mutation tool with no annotations and no output schema, the description is insufficient. It doesn't address critical context like deletion permanence, error conditions, or what happens to the memory system post-deletion. Given the complexity and lack of structured coverage, more completeness is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'key' fully documented in the schema. The description adds no additional semantic context beyond what's in the schema (e.g., format examples, key constraints, or relationship to other tools). Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Delete') and resource ('stored memory by key'), making the purpose immediately understandable. It doesn't explicitly differentiate from sibling tools like 'recall' or 'remember', but the verb 'Delete' strongly implies a destructive operation distinct from retrieval or storage functions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With sibling tools like 'recall' (likely for retrieving memories) and 'remember' (likely for storing memories), there's no indication of prerequisites, appropriate contexts, or exclusion criteria for this deletion operation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_game (Grade: A)
Get full details for a free-to-play game by its FreeToGame ID. Returns title, description, genre, platform, publisher, developer, release date, screenshots, and minimum system requirements.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | FreeToGame game ID (e.g. 452 for "Valorant") | |
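A sketch of a lookup over MCP, using the ID the schema itself gives as an example (452 for "Valorant"):

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "get_game",
    "arguments": {
      "id": 452
    }
  }
}
```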
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool returns specific data fields (title, description, etc.), which adds useful context beyond the input schema. However, it lacks details on error handling, rate limits, authentication needs, or whether it's a read-only operation, leaving behavioral gaps for a tool with no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the purpose ('Get full details for a free-to-play game by its FreeToGame ID') and follows with essential return value information. Every part earns its place with no wasted words, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 parameter, no nested objects), no annotations, and no output schema, the description does a good job by specifying the resource type, identification method, and return data. However, it could be more complete by mentioning error cases (e.g., invalid ID) or clarifying if it's read-only, slightly reducing the score.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'id' parameter well-documented as 'FreeToGame game ID (e.g. 452 for "Valorant")'. The description adds no additional parameter information beyond what the schema provides, so it meets the baseline of 3 for high schema coverage without compensating value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get full details') and resource ('a free-to-play game by its FreeToGame ID'), distinguishing it from sibling tools like 'filter_games' and 'list_games' which likely handle multiple games. It specifies the exact type of game (free-to-play) and identification method (FreeToGame ID).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use this tool (to get detailed information for a specific free-to-play game identified by ID) versus alternatives like 'list_games' (likely for listing multiple games) or 'filter_games' (likely for searching/filtering). However, it does not explicitly state exclusions or name the sibling tools as alternatives, keeping it at a 4.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_games (Grade: B)
List free-to-play games from FreeToGame. Optionally filter by platform and category, and sort results. Returns title, short description, game URL, genre, platform, publisher, release date, and thumbnail.
| Name | Required | Description | Default |
|---|---|---|---|
| sort_by | No | Sort order: "release-date", "popularity", "alphabetical", or "relevance" | |
| category | No | Genre/category filter, e.g. "mmorpg", "shooter", "strategy", "moba", "racing", "sports", "social", "sandbox", "open-world", "survival", "pvp", "pve", "pixel", "voxel", "zombie", "turn-based", "first-person", "third-person", "top-down", "tower-defense", "horror", "mmofps" | |
| platform | No | Platform filter: "pc", "browser", or "all" | "all" |
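As an illustration, a call combining all three optional filters, with values drawn from the documented enums; standard MCP JSON-RPC framing is assumed:

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "list_games",
    "arguments": {
      "category": "mmorpg",
      "platform": "pc",
      "sort_by": "popularity"
    }
  }
}
```

All three arguments are optional; omitting them returns the full unfiltered list.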
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It mentions the return data structure (title, description, URL, etc.) which is helpful, but doesn't disclose important behavioral traits like pagination, rate limits, authentication requirements, error conditions, or whether this is a read-only operation. The description provides some context but leaves significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with two sentences that efficiently convey purpose and capabilities. It's front-loaded with the main function and follows with optional features and return data. No wasted words, though it could be slightly more structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description provides basic purpose and return data structure but lacks important contextual information about behavior, limitations, and relationship to sibling tools. For a listing tool with filtering capabilities, it's minimally adequate but has clear gaps in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all three parameters thoroughly with descriptions and examples. The description adds minimal value beyond what's in the schema, mentioning filtering and sorting but not providing additional semantic context. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List' and resource 'free-to-play games from FreeToGame' with optional filtering capabilities. It distinguishes from siblings by specifying it's a listing operation rather than filtering or getting individual games, though the distinction could be more explicit.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage through 'optionally filter by platform and category, and sort results' but doesn't explicitly state when to use this tool versus sibling tools like 'filter_games' or 'get_game'. No specific alternatives or exclusions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (Grade: A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
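A sketch of a keyed retrieval, assuming standard MCP framing; the key name is hypothetical. Per the schema, sending empty arguments (omitting key) would instead list all stored keys:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "recall",
    "arguments": {
      "key": "user_preference"
    }
  }
}
```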
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It describes the dual behavior (retrieve by key or list all) and mentions persistence across sessions, but doesn't disclose error handling, performance characteristics, or what happens when a key doesn't exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences that are front-loaded with the core functionality, with the second sentence providing essential context about session persistence. Every word earns its place with zero redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with no output schema and no annotations, the description provides good coverage of purpose, usage, and parameter semantics. However, it doesn't describe the return format (what a 'memory' looks like) or error conditions, leaving some gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% description coverage, so the baseline is 3. The description adds value by explaining the semantic meaning of omitting the key parameter ('omit to list all keys'), which clarifies the tool's dual-purpose behavior beyond what the schema alone provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific verb ('retrieve' or 'list') and resource ('previously stored memory'), and distinguishes it from siblings like 'remember' (store) and 'forget' (delete). It explicitly covers both retrieval by key and listing all memories.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use it ('retrieve context you saved earlier') and when to omit the key parameter ('omit key to list all stored memories'). It distinguishes this from other memory operations like 'remember' for storage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (Grade: A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
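A sketch of a store operation with standard MCP framing; the key is one of the schema's own examples, and the value is illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 8,
  "method": "tools/call",
  "params": {
    "name": "remember",
    "arguments": {
      "key": "user_preference",
      "value": "Prefers browser-based strategy games"
    }
  }
}
```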
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the persistence differences between authenticated users ('persistent memory') and anonymous sessions ('last 24 hours'), which are crucial for understanding data retention. It could improve by mentioning potential limitations like storage size or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, with two sentences that efficiently convey purpose, usage, and behavioral details without any wasted words. Every sentence adds value, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is mostly complete. It covers purpose, usage, and key behavioral traits like persistence. It could be more complete by specifying the return value or error conditions, but it adequately supports tool selection and invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, clearly documenting both parameters ('key' and 'value') with examples. The description adds no additional parameter semantics beyond what the schema provides, so it meets the baseline score of 3 for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Store a key-value pair') and resource ('in your session memory'), distinguishing it from sibling tools like 'recall' (likely for retrieval) and 'forget' (likely for deletion). It explicitly identifies the tool's function beyond just restating the name.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('to save intermediate findings, user preferences, or context across tool calls'), giving practical examples. However, it does not explicitly state when not to use it or name alternatives (e.g., 'recall' for retrieval), which prevents a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming this connector lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.