GitLab Public
Server Details
GitLab Public MCP — wraps the GitLab REST API v4 (public endpoints, no auth)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-gitlab-public
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4/5 across 8 of 8 tools scored.
The set includes a general-purpose 'ask_pipeworx' tool that overlaps with other tools like 'search_issues' and 'search_projects', as it claims to pick the right tool for plain English queries. Additionally, memory tools (forget, recall, remember) serve a different purpose but are mixed in with GitLab-specific tools, creating ambiguity.
Most GitLab tools use verb_noun pattern (e.g., 'get_project', 'search_issues'), but the memory tools ('forget', 'recall', 'remember') use plain verbs, and 'ask_pipeworx' and 'discover_tools' follow a different style. This mix of conventions reduces consistency.
With 8 tools, the count is slightly above the typical range but not excessive. The tools cover GitLab operations and memory management, which is appropriate for the server's purpose, and the set remains well-scoped.
The server provides basic search and retrieval for GitLab projects and issues, but lacks CRUD operations like creating or updating projects/issues. The memory tools add auxiliary functionality but do not fill the gaps in GitLab workflow completeness.
Available Tools
8 tools
ask_pipeworx
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
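A minimal invocation sketch, assuming the standard MCP JSON-RPC `tools/call` envelope (not specific to this server); the question text is taken from the examples in the description above:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "What is the US trade deficit with China?"
    }
  }
}
```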
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool 'picks the right tool, fills the arguments, and returns the result,' indicating autonomous selection and invocation. It also states that no browsing or schema learning is needed, setting expectations about the tool's behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, with four sentences that are front-loaded. The first sentence states the core purpose, followed by behavioral details and examples. No extraneous information; every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one required parameter, no output schema, no annotations), the description is complete enough. It explains what the tool does, how it works, and gives examples. There is no missing critical information for an agent to use this tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a single parameter 'question' described as 'Your question or request in natural language.' The description adds context by noting the parameter should be plain English and provides examples, but the schema already covers the parameter adequately, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Ask a question in plain English and get an answer from the best available data source.' It specifies the action (ask), resource (best available data source), and differentiates from sibling tools that are more specific (e.g., search_issues, search_projects).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implicitly tells when to use this tool: when you have a question and want the system to pick the right tool. It does not explicitly say when not to use it or list alternatives, but it provides examples that illustrate typical use cases, making usage clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
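Assuming the same `tools/call` envelope as above, a sketch of the params payload; the query is taken from the schema examples and the limit stays within the documented max of 50:

```json
{
  "name": "discover_tools",
  "arguments": {
    "query": "find trade data between countries",
    "limit": 5
  }
}
```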
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It explains that the tool searches and returns relevant tools with names and descriptions, but does not detail behaviors such as whether the search is fuzzy or case-sensitive, or how results are ranked. It also does not mention any side effects or limitations (e.g., whether the tool is read-only).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each serving a purpose: explaining the tool, stating the output, and giving usage guidance. It could be slightly more concise by merging the first two sentences, but it is efficient overall.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description explains what is returned ('most relevant tools with names and descriptions'), which is sufficient for a search tool. Parameter details are well-covered by schema. The description does not mention pagination or error handling, but for a simple search tool with only two parameters, this is adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds meaning by explaining the query parameter expects a 'natural language description', which goes beyond the schema's generic description. The limit parameter is not elaborated in the description, but the schema already specifies default and max, so this is acceptable.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('search', 'returns', 'call this FIRST') and clearly identifies the resource ('Pipeworx tool catalog'). It distinguishes the tool from siblings by stating it is for finding tools when 500+ are available, which no other sibling does.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use ('call this FIRST') and implies when not to (after tools are found). Provides context of 500+ tools, guiding the agent to prioritize this tool before others.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
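A params sketch under the same envelope, reusing the "subject_property" key from the remember tool's schema example:

```json
{
  "name": "forget",
  "arguments": {
    "key": "subject_property"
  }
}
```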
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully disclose behavior. It states deletion by key but omits whether the operation is reversible, requires confirmation, or has side effects (e.g., cascading deletions). The description is too minimal for a destructive action.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that is front-loaded with the action verb and resource. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one required parameter, no output schema, no annotations), the description is adequate but lacks behavioral details. It minimally covers the essentials but does not fully inform the agent about potential consequences.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with one required parameter 'key' described as 'Memory key to delete'. The description adds no additional meaning beyond the schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (Delete), the resource (a stored memory), and the means (by key). It directly conveys the tool's function without ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use when you need to remove a specific memory, but it does not explicitly state when not to use it or compare it to sibling tools like 'remember' or 'recall'. However, the purpose is clear enough to infer basic usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_project
Get full details for a public GitLab project by ID or path (e.g., "gitlab-org%2Fgitlab"). Returns name, description, stars, forks, default branch, topics, and last activity date.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Project numeric ID or URL-encoded path (e.g., "gitlab-org%2Fgitlab") | |
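A params sketch under the same envelope, using the URL-encoded path example from the description (the %2F encodes the slash in gitlab-org/gitlab):

```json
{
  "name": "get_project",
  "arguments": {
    "id": "gitlab-org%2Fgitlab"
  }
}
```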
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description must carry full weight. It discloses that the tool is for public projects only and returns specific fields, but does not mention read-only behavior, rate limits, or error conditions. With no annotations, a score of 3 is appropriate for a simple get operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that is concise and front-loaded with the main purpose. Every phrase adds value: input formats, return fields, and scope. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given only one parameter, no output schema, and no annotations, the description is complete enough for the agent to select and invoke correctly. It covers input format, scope (public only), and expected return fields. However, it could be improved by noting that the tool is read-only.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (one parameter with description), so baseline is 3. The description adds the example of URL-encoded path and clarifies accepted formats, which adds some value beyond the schema's description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool gets a public GitLab project, specifies the two accepted input formats (numeric ID or URL-encoded path), and lists the types of details returned. It uses specific verbs and resources, and the sibling tools do not overlap in functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use (to fetch full project details by ID or path) but does not provide explicit guidance on when not to use or alternative tools. No exclusions or comparisons to siblings are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
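A params sketch under the same envelope; omitting the key argument entirely would list all stored memories instead:

```json
{
  "name": "recall",
  "arguments": {
    "key": "subject_property"
  }
}
```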
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must cover behavioral traits. It states that the tool retrieves or lists memories, which is transparent. It does not mention side effects (none are expected) or performance implications, but given the tool's simplicity, this is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with action and optional behavior. Efficient and clear, no filler. Could be slightly more concise by merging sentences, but it's well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one optional parameter and no output schema, the description is complete. It explains both modes of operation and the purpose. No need for more detail.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the description adds clarity: 'omit to list all keys' explains the behavior when key is absent. However, the schema already describes the key parameter, so the description adds limited additional value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a memory by key or lists all memories when key is omitted. It distinguishes from siblings like 'remember' (store) and 'forget' (delete).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly mentions when to use: to retrieve context saved earlier. Does not mention when not to use or alternatives, but the context is clear and the tool's role is well-defined among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
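A params sketch under the same envelope; the key comes from the schema examples and the value text is purely illustrative:

```json
{
  "name": "remember",
  "arguments": {
    "key": "target_ticker",
    "value": "AAPL, flagged for earnings follow-up"
  }
}
```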
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses key behavioral traits: the difference in memory persistence between authenticated and anonymous users. Since no annotations are provided, the description carries the full burden, and it does well by clarifying that anonymous sessions expire after 24 hours. It does not mention that storing to an existing key overwrites its value, though this is implicit in a key-value store.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each adding value: purpose, usage examples, and persistence context. No wasted words, front-loaded with the core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 string params, no output schema), the description is complete. It covers what the tool does, when to use it, and behavioral nuances (persistence). No gaps remain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already documents both parameters well. The description adds context like example values and typical uses ('findings, addresses, preferences, notes'), which is helpful but not essential. Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Store a key-value pair in your session memory', specifying the verb (store) and resource (key-value pair in session memory). It distinguishes from siblings like 'recall' (retrieve) and 'forget' (remove), which are different operations on similar resources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use: 'to save intermediate findings, user preferences, or context across tool calls.' It also distinguishes use cases by persistence: 'Authenticated users get persistent memory; anonymous sessions last 24 hours.' This helps decide when to use this vs. other memory tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_issues
Search issues across public GitLab projects by keyword. Returns issue title, state, author, labels, project ID, and direct URL.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of results to return (default 10, max 100) | |
| query | Yes | Search query for issue titles and descriptions | |
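A params sketch under the same envelope; the query string is hypothetical and the limit stays within the documented max of 100:

```json
{
  "name": "search_issues",
  "arguments": {
    "query": "pipeline timeout",
    "limit": 10
  }
}
```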
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It clearly states the scope ('across public GitLab projects') and lists return fields. However, it does not mention pagination, ordering, or rate limits, which are relevant for search operations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences: first states purpose and scope, second lists return fields. No fluff, front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and moderate complexity (search with limit), the description is nearly complete. Missing only minor behavioral details like default ordering.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description does not add parameter details beyond the schema, but the schema already describes both parameters adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the action ('Search issues'), the target ('across public GitLab projects'), and the specific fields returned ('issue title, state, author, labels, project ID, and direct URL'). This verb+resource+scope combination fully distinguishes it from sibling tools like search_projects.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage for searching issues but provides no explicit guidance on when to use this tool versus alternatives (e.g., search_projects). No 'when not to use' or specific context given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_projects
Search public GitLab projects by keyword, sorted by popularity. Returns project ID, name, description, star count, fork count, open issues, and web URL.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of results to return (default 10, max 100) | |
| query | Yes | Search query string | |
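A params sketch under the same envelope; the query string is hypothetical:

```json
{
  "name": "search_projects",
  "arguments": {
    "query": "kubernetes operator",
    "limit": 10
  }
}
```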
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries the full burden. It states that results are sorted by popularity and lists the fields returned, but it does not disclose pagination or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with clear action, scope, and output fields. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description compensates by listing return fields. The tool is simple with only 2 parameters, so the description is adequate. Minor gap: it does not spell out how 'popularity' is ranked (e.g., whether it maps to star count).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description does not add semantics beyond what the schema provides for the parameters, apart from implying that the query drives a keyword search.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that it searches public GitLab projects by keyword, sorted by popularity. It also lists the returned fields, distinguishing it from siblings like search_issues and get_project.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies usage for keyword-based project search, but does not explicitly say when to use this versus alternatives like get_project for single project retrieval or search_issues for issue search.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is marked unhealthy when Glama is unable to connect to the server successfully. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!