GitLab
Server Details
GitLab MCP — wraps the GitLab REST API v4 (BYO API key)
| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | pipeworx-io/mcp-gitlab |
| GitHub Stars | 0 |
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.7/5 across 10 of 10 tools scored.
The set mixes GitLab-specific tools (gitlab_get_file, etc.) with generic memory and tool discovery tools. The memory tools (remember, recall, forget) and discovery tools (ask_pipeworx, discover_tools) serve completely different purposes from the GitLab tools, causing confusion about the server's actual domain.
GitLab tools use a consistent gitlab_verb_noun pattern, but the other tools (ask_pipeworx, discover_tools, remember, recall, forget) break the pattern entirely. They use imperative verbs without a namespace, creating a mix of conventions.
10 tools is a reasonable number, but the server appears to combine two separate concerns: GitLab operations (5 tools) and a general-purpose memory/discovery system (5 tools). This split makes the count feel padded for the GitLab domain.
For GitLab, only basic read operations (get file, get project, list issues, list MRs, list projects) are provided, with no create/update/delete operations. The memory and discovery tools are unrelated, leaving significant gaps in GitLab workflow coverage.
Available Tools
10 tools

ask_pipeworx (Grade A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
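To make the call shape concrete, here is a minimal sketch of invoking this tool through MCP's standard JSON-RPC `tools/call` method. The envelope follows the MCP spec; the question text is taken from the description's own examples, and the request id is illustrative.

```python
import json

# Hypothetical tools/call request for ask_pipeworx. Only the
# "question" argument from the table above is required.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {
            "question": "What is the US trade deficit with China?",
        },
    },
}

print(json.dumps(request, indent=2))
```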
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It discloses that the tool selects the best data source, fills arguments, and returns results, indicating automated orchestration. This is sufficient for a high-level question-answering tool, though it does not detail specific behaviors like error handling or latency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (4 sentences) and front-loaded with the purpose. Each sentence adds value: purpose, behavior, and examples. Slightly verbose due to examples, but effective.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, no output schema, no nested objects), the description is complete enough. It explains the tool's role as an orchestrator and provides examples. No output schema exists, but the description does not need to spell out return values for a generic Q&A tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already documents the single parameter. The description adds context that the question should be in natural language and provides examples, but does not add structural details beyond the schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool accepts a plain English question and returns an answer from the best data source. It explicitly distinguishes itself from other tools by acting as an orchestrator that selects tools and fills arguments, contrasting with sibling tools that perform specific tasks.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use this tool: when you have a natural language question and want the system to handle tool selection. It provides examples to illustrate usage. It does not explicitly state when not to use it or name alternatives, though the examples and context imply it is the default entry point for questions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (max 50) | 20 |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
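Reusing the `tools/call` envelope sketched above for ask_pipeworx, a discovery request differs only in its arguments. The query string below comes from the schema's own examples; the limit value is illustrative.

```python
# Hypothetical discover_tools arguments: "query" is required,
# "limit" caps the number of results (default 20, max 50 per the schema).
arguments = {
    "query": "find trade data between countries",
    "limit": 10,
}
```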
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses that it returns the most relevant tools with names and descriptions, and that it takes natural language input. With no annotations provided, the description carries the behavioral burden and covers it reasonably well, though it could state whether only the top matches are returned or results are paginated.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with key action and result, no wasted words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given low complexity (2 params, no nested objects, no output schema), the description is mostly complete. It could mention whether results are ranked by relevance, but it is sufficient for typical use.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. Description adds context on natural language usage but does not add meaning beyond schema for 'limit' or 'query'.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the action (search) and resource (tool catalog), and distinguishes itself from siblings by specifying it is for finding tools among 500+ options.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly instructs to call this FIRST when many tools are available, and provides context on when to use it (finding right tools) versus other tools.
forget (Grade B)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
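A hedged sketch of the arguments for a delete call, reusing a key name drawn from the sibling remember tool's examples. Since the description does not say what happens for a missing key, a client should treat the call defensively.

```python
# Hypothetical forget arguments: removes the memory stored under this key.
# Whether deleting a nonexistent key errors or silently succeeds is
# undocumented, so callers should handle both outcomes.
arguments = {"key": "target_ticker"}
```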
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It states that it deletes a memory by key, implying irreversibility, but doesn't specify whether the call is idempotent, what happens if the key doesn't exist, or whether any confirmation is required. Adequate but minimal.
Is the description appropriately sized, front-loaded, and free of redundancy?
Very concise: one short sentence with no unnecessary words. However, it could include a bit more context without losing conciseness.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given low complexity (1 param, no output schema, no annotations), the description is minimally complete. It states the action and the required parameter. No extra context about return values or side effects is provided, but the tool is simple.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the description adds no additional meaning beyond the schema's description of 'Memory key to delete'. Baseline 3 is appropriate since schema already documents the parameter.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a clear verb 'Delete' and a specific resource 'a stored memory by key'. It distinguishes from sibling tools like 'remember' and 'recall' by indicating a write/destructive action, but does not explicitly differentiate from all siblings.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like 'remember' or 'recall'. There is no mention of prerequisites, side effects, or when not to use it.
gitlab_get_file (Grade B)
Fetch file content from a GitLab repository by project ID and file path (e.g., "src/main.py"). Returns decoded content, file size, name, and encoding.
| Name | Required | Description | Default |
|---|---|---|---|
| ref | No | Branch, tag, or commit SHA | default branch |
| _apiKey | Yes | GitLab personal access token | |
| file_path | Yes | Path to the file within the repository | |
| project_id | Yes | Project ID or URL-encoded path | |
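Because the server wraps the GitLab REST API v4, this tool plausibly maps onto the documented `GET /projects/:id/repository/files/:file_path` endpoint. The sketch below calls that endpoint directly; the host, token, project, and file path are placeholders, and the mapping to the tool is an assumption.

```python
import base64
from urllib.parse import quote

import requests

TOKEN = "glpat-your-token"                    # the _apiKey parameter
project_id = quote("group/project", safe="")  # project paths must be URL-encoded
file_path = quote("src/main.py", safe="")     # slashes in file paths too

resp = requests.get(
    f"https://gitlab.com/api/v4/projects/{project_id}/repository/files/{file_path}",
    headers={"PRIVATE-TOKEN": TOKEN},
    params={"ref": "main"},  # branch, tag, or commit SHA
)
resp.raise_for_status()
data = resp.json()

# GitLab returns the file body base64-encoded; the tool's "decoded
# content" presumably corresponds to this step.
content = base64.b64decode(data["content"]).decode()
print(data["file_name"], data["size"], data["encoding"])
print(content[:200])
```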
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses that the content is decoded from base64 and lists the returned fields, but with no annotations present, it does not cover potential side effects, authentication requirements (beyond the API key parameter), or rate limits. A 3 is appropriate given the absence of annotations.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences and efficiently states purpose and return values. Slight room for improvement by front-loading the base64 decoding hint, but still concise.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that there is no output schema, the description partially fills the gap by listing return fields, but it omits behavioral details like pagination or error cases. It is adequate but not thorough.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the description adds no parameter-specific context beyond what the schema already provides. Baseline 3 is correct since the schema is self-sufficient.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb-resource pair ('Fetch file content from a GitLab repository') and enumerates return values (content, name, size, encoding), clearly distinguishing it from siblings like gitlab_get_project or gitlab_list_projects.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives (e.g., gitlab_get_project for project metadata), nor are there any when-not-to-use or prerequisite instructions.
gitlab_get_project (Grade A)
Get details for a specific GitLab project (e.g., project ID "123" or path "group/project"). Returns name, description, visibility, stars, forks, and default branch.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Project ID (numeric) or URL-encoded path (e.g., "group%2Fproject") | |
| _apiKey | Yes | GitLab personal access token | |
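The likely underlying endpoint is `GET /projects/:id` from the GitLab REST API v4. This sketch shows the URL-encoding the `id` description calls for; host and token are placeholders, and the endpoint mapping is an assumption.

```python
from urllib.parse import quote

import requests

# "group/project" becomes "group%2Fproject", matching the schema example.
project_id = quote("group/project", safe="")

resp = requests.get(
    f"https://gitlab.com/api/v4/projects/{project_id}",
    headers={"PRIVATE-TOKEN": "glpat-your-token"},  # the _apiKey parameter
)
project = resp.json()
print(project["name"], project["star_count"], project["forks_count"],
      project["default_branch"])
```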
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so description carries full burden. It discloses that the tool returns full project details but does not mention side effects, authentication requirements (beyond the API key param), or rate limits. Adequate but not comprehensive.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, efficient and front-loaded with the core action. No unnecessary words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool is a simple retrieval with no output schema, the description covers the main purpose and return types. However, it could mention that the tool is read-only, which is implied but not stated.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. Description mentions 'ID or URL-encoded path' but does not add significant meaning beyond the schema's description of the id parameter. No additional value for the _apiKey parameter.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a single GitLab project by ID or URL-encoded path, and lists the types of details returned. This is specific and distinct from sibling tools like gitlab_list_projects.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains what the tool does but does not provide guidance on when to use it versus alternatives like gitlab_list_projects. No explicit when-not or exclusion criteria are given.
gitlab_list_issues (Grade B)
Search issues in a GitLab project by project ID. Returns issue ID, title, state (open/closed), labels, assignee, and URL. Filter by status and labels.
| Name | Required | Description | Default |
|---|---|---|---|
| state | No | Filter by state: "opened", "closed", or "all" | "opened" |
| search | No | Search issues by title or description | |
| _apiKey | Yes | GitLab personal access token | |
| per_page | No | Number of issues to return (max 100) | 20 |
| project_id | Yes | Project ID or URL-encoded path | |
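These filters line up with the query parameters of GitLab's v4 `GET /projects/:id/issues` endpoint, so the tool presumably forwards them as-is. A hedged sketch against that endpoint; the project ID, token, and search term are placeholders.

```python
import requests

resp = requests.get(
    "https://gitlab.com/api/v4/projects/123/issues",
    headers={"PRIVATE-TOKEN": "glpat-your-token"},
    params={
        "state": "opened",  # "opened", "closed", or "all"
        "search": "crash",  # matches issue title or description
        "per_page": 20,
    },
)
for issue in resp.json():
    print(issue["iid"], issue["state"], issue["title"], issue["web_url"])
```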
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description notes that it returns specific fields (IID, title, state, labels, assignee, URL), but does not mention pagination behavior, rate limits, or that the API requires authentication (though the _apiKey parameter covers that). No annotations are provided, so the description carries the full burden, but it is adequate for a read-only list operation.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (two sentences) and front-loaded with the main action. It provides useful information about returned fields without unnecessary detail.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool with no output schema, the description is fairly complete. It explains what the tool does and what it returns. However, it could mention pagination (per_page parameter) and default filtering by state (opened), but those are in the schema.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage, so the schema compensates for the description's lack of parameter details. The description lists returned fields but not parameter specifics, which is acceptable given the schema's richness.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Search issues') and the resource ('in a GitLab project'), and lists the returned fields (IID, title, state, labels, assignee, URL). This distinguishes it from siblings like gitlab_list_mrs, which lists merge requests, not issues.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives (e.g., when to use search vs listing all, or when filtering by state). It does not mention any prerequisites like needing to know the project ID beforehand.
gitlab_list_mrs (Grade A)
List merge requests in a GitLab project by project ID. Returns MR ID, title, state, author, source/target branches, and URL. Filter by state and author.
| Name | Required | Description | Default |
|---|---|---|---|
| state | No | Filter by state: "opened", "closed", "merged", or "all" | "opened" |
| _apiKey | Yes | GitLab personal access token | |
| per_page | No | Number of merge requests to return (max 100) | 20 |
| project_id | Yes | Project ID or URL-encoded path | |
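As with issues, these parameters mirror GitLab's v4 `GET /projects/:id/merge_requests` endpoint, which the tool presumably wraps. A sketch under that assumption, with placeholder project ID and token:

```python
import requests

resp = requests.get(
    "https://gitlab.com/api/v4/projects/123/merge_requests",
    headers={"PRIVATE-TOKEN": "glpat-your-token"},
    params={"state": "merged", "per_page": 20},
)
for mr in resp.json():
    # Each entry carries the branches the MR moves between.
    print(mr["iid"], mr["title"], f'{mr["source_branch"]} -> {mr["target_branch"]}')
```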
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It mentions the returned fields but does not disclose pagination behavior beyond the schema's per_page parameter, rate limits, or authentication details. Adequate but not rich.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, concise and front-loaded with action and resource. No wasted words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a list tool with no output schema, the description names the key fields returned. However, it lacks details such as sort order, filters beyond state, and default behavior. Adequate for simple use.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are well-documented in schema. Description adds value by listing returned fields but no additional parameter details. Baseline 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'List' and resource 'merge requests in a GitLab project', and lists the returned fields, distinguishing it from siblings like gitlab_list_issues and gitlab_list_projects.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives, though the description implies listing MRs with filtering by state. No mention of when not to use it.
gitlab_list_projects (Grade B)
List all accessible GitLab projects. Returns project ID, name, path, description, star count, and URL. Use gitlab_get_project to fetch detailed info.
| Name | Required | Description | Default |
|---|---|---|---|
| owned | No | If true, only return projects owned by the user | false |
| search | No | Search projects by name | |
| _apiKey | Yes | GitLab personal access token | |
| per_page | No | Number of projects to return (max 100) | 20 |
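The owned/search/per_page trio matches GitLab's v4 `GET /projects` endpoint, which the tool presumably wraps. A hedged sketch; token and search term are placeholders.

```python
import requests

resp = requests.get(
    "https://gitlab.com/api/v4/projects",
    headers={"PRIVATE-TOKEN": "glpat-your-token"},
    # GitLab expects lowercase boolean strings in query params.
    params={"owned": "true", "search": "mcp", "per_page": 20},
)
for project in resp.json():
    print(project["id"], project["path_with_namespace"], project["star_count"])
```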
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description must cover behavioral traits. It states that the tool lists projects accessible to the user, which implies a read-only operation. However, it does not disclose pagination behavior beyond the per_page parameter, rate limits, or the fact that it acts on behalf of the token's user. Listing the returned fields adds value, but the read-only nature could be more explicit.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the primary action and resource. It lists key return fields efficiently. No unnecessary words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 4 parameters with full schema coverage, and no output schema, the description is adequate but could benefit from stating that results are paginated and that the user must have appropriate GitLab access. It lists returned fields, which is helpful.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters. The description does not add any additional parameter meaning beyond what is in the schema. Baseline score of 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List' and the resource 'GitLab projects', scoped to those accessible with the supplied token. It distinguishes itself from gitlab_get_project by returning a list, but does not explicitly differentiate itself from gitlab_list_issues or gitlab_list_mrs, which cover separate resource types.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when needing to list projects for the authenticated user, but provides no guidance on when not to use it or alternatives. No explicit comparison with siblings is given.
recall (Grade A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
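Two hypothetical argument shapes, reflecting the description's fetch-one versus list-all behavior; the key name reuses the remember tool's schema example.

```python
# Fetch a single memory by key.
fetch_one = {"key": "target_ticker"}

# Omit the key entirely to list all stored memory keys.
list_all = {}
```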
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must cover behavioral traits. It indicates read-only behavior and cross-session persistence, but doesn't mention storage limits, expiry, or what happens when a key does not exist.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, no wasted words, front-loaded with the core action, and immediately useful.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given low complexity (one param, no output schema), the description covers retrieval and listing. It could mention what happens if the key doesn't exist, but it is adequate overall.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single parameter, and the schema itself already notes that omitting the key lists all keys. The description repeats this behavior but adds no extra semantics beyond the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'retrieve' and the resource 'stored memory', with explicit behavior for key omission. It distinguishes itself from sibling 'remember' and 'forget' by focusing on retrieval.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It explicitly says to use it when retrieving saved context, but it does not state when not to use it or compare itself to siblings like 'discover_tools'.
remember (Grade A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
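A hypothetical remember call using a key from the schema's examples; the value is illustrative, and overwrite behavior for an existing key is undocumented (as the review below notes).

```python
# Hypothetical remember arguments: both fields are required strings.
arguments = {
    "key": "target_ticker",
    "value": "AAPL",  # any text: findings, addresses, preferences, notes
}
```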
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description carries full burden. Discloses persistence behavior (authenticated persistent, anonymous 24h). Does not mention idempotency or overwrite behavior on duplicate keys.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each adds value: purpose, usage context, persistence details. No fluff.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Simple tool with two string params and no output schema. The description is complete for typical use. It could mention whether values are mutable or whether there is a size limit.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with good descriptions for both parameters. Description adds usage examples for keys (e.g., subject_property) and clarifies value can be any text. Adds meaning beyond schema.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'store a key-value pair' with specific usage context (session memory, intermediate findings, user preferences, context across tool calls). Distinguishes from sibling 'recall' and 'forget' by focusing on storage.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit usage contexts (save findings, preferences, context). Mentions persistence differences for authenticated vs anonymous users. Does not explicitly say when not to use it, but sibling context makes it clear.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.