
gitlab_get_user_mr_comments

Retrieve all merge request comments authored by a specific user to track code review participation, analyze feedback quality, and assess team collaboration.

Instructions

Get all comments authored by a user on merge requests

Find all merge request comments and review feedback provided by the specified user, including code review discussions.

Returns MR comment information with:

  • Comment details: content, type (review/discussion)

  • MR context: title, state, author, project

  • Review info: approval status, code line references

  • Thread info: discussion flow, resolution status

  • Impact: influence on code quality and decisions

Use cases:

  • Code review participation tracking

  • Quality assurance monitoring

  • Mentoring and feedback analysis

  • Team collaboration assessment

Parameters:

  • user_id: Numeric user ID

  • username: Username string (use either user_id or username)

  • project_id: Optional project scope filter

  • comment_type: Filter by type (review, discussion, all)

  • since: Comments after date (YYYY-MM-DD)

  • until: Comments before date (YYYY-MM-DD)

  • mr_state: Filter by MR state (opened, merged, closed, all)

  • sort: Sort order (created, updated, project)

  • per_page: Results per page (default: 20)

  • page: Page number (default: 1)

Example: Get code review comments from last month

{
  "username": "johndoe",
  "comment_type": "review",
  "since": "2024-01-01",
  "until": "2024-01-31"
}
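When a user has more comments than fit in one page, an agent can walk the pages until a short page signals the end. The sketch below illustrates that loop; `call_tool` is a hypothetical stand-in for an MCP client call and returns canned pages here rather than real API data.

```python
from typing import Any, Dict, List

def call_tool(name: str, arguments: Dict[str, Any]) -> Dict[str, Any]:
    """Hypothetical stand-in for an MCP client invocation; serves canned pages."""
    page = arguments.get("page", 1)
    canned = {1: [{"id": i} for i in range(20)], 2: [{"id": 20}, {"id": 21}]}
    return {"comments": canned.get(page, [])}

def fetch_all_mr_comments(username: str, per_page: int = 20) -> List[Dict[str, Any]]:
    """Collect every page of MR comments until a page shorter than per_page."""
    comments: List[Dict[str, Any]] = []
    page = 1
    while True:
        result = call_tool("gitlab_get_user_mr_comments", {
            "username": username,
            "per_page": per_page,
            "page": page,
        })
        batch = result["comments"]
        comments.extend(batch)
        if len(batch) < per_page:  # short (or empty) page: no more results
            break
        page += 1
    return comments

print(len(fetch_all_mr_comments("johndoe")))  # 22 with the canned pages above
```

With a real client, swapping `call_tool` for the actual MCP invocation is the only change the loop needs.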

Input Schema

  • username (required): Username string

  • project_id (optional): Optional project scope filter

  • since (optional): Comments after date (YYYY-MM-DD)

  • until (optional): Comments before date (YYYY-MM-DD)

  • per_page (optional, integer, range 1-100, default 20): Number of results per page. Example: 50 for faster browsing. Tip: use smaller values (10-20) for detailed operations, larger (50-100) for listing.

  • page (optional, integer, minimum 1, default 1): Page number for pagination. Example: 3 to get the third page of results. Use with per_page to navigate large result sets.

Implementation Reference

  • The core handler function that implements the tool logic: validates username, extracts optional parameters (project_id, date ranges, pagination), and delegates to GitLabClient.get_user_mr_comments()
    def handle_get_user_mr_comments(client: GitLabClient, arguments: Optional[Dict[str, Any]]) -> Dict[str, Any]:
        """Handle getting user's MR comments"""
        username = get_argument(arguments, "username")
        if not username:
            raise ValueError("username is required")
        
        project_id = get_argument(arguments, "project_id")
        since = get_argument(arguments, "since")
        until = get_argument(arguments, "until")
        per_page = get_argument(arguments, "per_page", DEFAULT_PAGE_SIZE)
        page = get_argument(arguments, "page", 1)
        
        return client.get_user_mr_comments(
            username=username,
            project_id=project_id,
            since=since,
            until=until,
            per_page=per_page,
            page=page
        )
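The handler above can be exercised in isolation with a stub client. The sketch below is self-contained for that purpose: the `get_argument` helper, `DEFAULT_PAGE_SIZE` constant, and `StubGitLabClient` are simplified stand-ins for the real implementations, and the handler body is condensed from the listing above.

```python
from typing import Any, Dict, Optional

DEFAULT_PAGE_SIZE = 20  # assumed to match the documented per_page default

def get_argument(arguments: Optional[Dict[str, Any]], key: str, default: Any = None) -> Any:
    """Simplified stand-in: fetch a key from the arguments dict with a default."""
    return (arguments or {}).get(key, default)

class StubGitLabClient:
    """Echoes back the parameters it receives instead of calling the GitLab API."""
    def get_user_mr_comments(self, **kwargs: Any) -> Dict[str, Any]:
        return {"requested_with": kwargs, "comments": []}

def handle_get_user_mr_comments(client: Any, arguments: Optional[Dict[str, Any]]) -> Dict[str, Any]:
    """Condensed version of the handler shown above."""
    username = get_argument(arguments, "username")
    if not username:
        raise ValueError("username is required")
    return client.get_user_mr_comments(
        username=username,
        project_id=get_argument(arguments, "project_id"),
        since=get_argument(arguments, "since"),
        until=get_argument(arguments, "until"),
        per_page=get_argument(arguments, "per_page", DEFAULT_PAGE_SIZE),
        page=get_argument(arguments, "page", 1),
    )

result = handle_get_user_mr_comments(
    StubGitLabClient(),
    {"username": "johndoe", "since": "2024-01-01", "until": "2024-01-31"},
)
print(result["requested_with"]["per_page"])  # falls back to the default of 20
```

Note that omitting `username` raises `ValueError` before any client call is made, so argument errors surface without touching the API.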
  • Input schema definition for the gitlab_get_user_mr_comments tool, specifying required 'username' and optional parameters for filtering and pagination.
        name=TOOL_GET_USER_MR_COMMENTS,
        description=desc.DESC_GET_USER_MR_COMMENTS,
        inputSchema={
            "type": "object",
            "properties": {
                "username": {"type": "string", "description": "Username string"},
                "project_id": {"type": "string", "description": "Optional project scope filter"},
                "since": {"type": "string", "description": "Comments after date (YYYY-MM-DD)"},
                "until": {"type": "string", "description": "Comments before date (YYYY-MM-DD)"},
                "per_page": {"type": "integer", "description": desc.DESC_PER_PAGE, "default": DEFAULT_PAGE_SIZE, "minimum": 1, "maximum": MAX_PAGE_SIZE},
                "page": {"type": "integer", "description": desc.DESC_PAGE_NUMBER, "default": 1, "minimum": 1}
            },
            "required": ["username"]
        }
    ),
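A client can pre-validate arguments against this schema before dispatching the call. The check below is a hand-rolled stdlib sketch, not the `jsonschema` library, and covers only the required-field, type, and range constraints shown above; the maximum of 100 is assumed from the documented 1-100 range for `per_page`.

```python
from typing import Any, Dict, List

SCHEMA: Dict[str, Any] = {
    "type": "object",
    "properties": {
        "username": {"type": "string"},
        "project_id": {"type": "string"},
        "since": {"type": "string"},
        "until": {"type": "string"},
        "per_page": {"type": "integer", "minimum": 1, "maximum": 100},
        "page": {"type": "integer", "minimum": 1},
    },
    "required": ["username"],
}

TYPE_MAP = {"string": str, "integer": int}

def validate(args: Dict[str, Any], schema: Dict[str, Any]) -> List[str]:
    """Return a list of validation error messages (empty list means valid)."""
    errors: List[str] = []
    for field in schema["required"]:
        if field not in args:
            errors.append(f"missing required field: {field}")
    for name, value in args.items():
        spec = schema["properties"].get(name)
        if spec is None:
            errors.append(f"unknown field: {name}")
            continue
        expected = TYPE_MAP[spec["type"]]
        # bool is a subclass of int in Python, so reject it explicitly for integers
        if not isinstance(value, expected) or isinstance(value, bool):
            errors.append(f"{name}: expected {spec['type']}")
            continue
        if "minimum" in spec and value < spec["minimum"]:
            errors.append(f"{name}: below minimum {spec['minimum']}")
        if "maximum" in spec and value > spec["maximum"]:
            errors.append(f"{name}: above maximum {spec['maximum']}")
    return errors

print(validate({"username": "johndoe", "per_page": 50}, SCHEMA))  # []
print(validate({"per_page": 0}, SCHEMA))  # missing username, per_page below minimum
```

Validating up front turns schema violations into clear messages instead of round-trip errors from the server.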
  • Registration of the tool handler in the global TOOL_HANDLERS dictionary, which maps tool names to their handler functions for use in MCP server.call_tool()
    TOOL_GET_USER_ISSUE_COMMENTS: handle_get_user_issue_comments,
    TOOL_GET_USER_MR_COMMENTS: handle_get_user_mr_comments,
    TOOL_GET_USER_DISCUSSION_THREADS: handle_get_user_discussion_threads,
    TOOL_GET_USER_RESOLVED_THREADS: handle_get_user_resolved_threads,
  • Constant defining the exact tool name string used throughout the codebase for consistency.
    TOOL_GET_USER_MR_COMMENTS = "gitlab_get_user_mr_comments"
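The registration pattern above lets the server resolve a handler by tool name at call time. A minimal sketch of that lookup follows; the handler body and the `call_tool` dispatcher are hypothetical stand-ins for illustration only.

```python
from typing import Any, Dict, Optional

TOOL_GET_USER_MR_COMMENTS = "gitlab_get_user_mr_comments"

def handle_get_user_mr_comments(client: Any, arguments: Optional[Dict[str, Any]]) -> Dict[str, Any]:
    """Hypothetical stand-in handler that records which tool was invoked."""
    return {"tool": TOOL_GET_USER_MR_COMMENTS, "arguments": arguments or {}}

# Name-to-handler registry, mirroring the TOOL_HANDLERS dictionary described above.
TOOL_HANDLERS = {
    TOOL_GET_USER_MR_COMMENTS: handle_get_user_mr_comments,
}

def call_tool(client: Any, name: str, arguments: Optional[Dict[str, Any]]) -> Dict[str, Any]:
    """Resolve the handler by tool name, as server.call_tool() is described to do."""
    handler = TOOL_HANDLERS.get(name)
    if handler is None:
        raise ValueError(f"unknown tool: {name}")
    return handler(client, arguments)

out = call_tool(None, "gitlab_get_user_mr_comments", {"username": "johndoe"})
print(out["tool"])  # gitlab_get_user_mr_comments
```

Keeping the tool name in a single constant means the schema definition, the registry, and any dispatch logic cannot drift out of sync.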
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes what the tool returns (e.g., 'MR comment information' with details like content, type, MR context) and mentions pagination via 'per_page' and 'page' parameters. However, it lacks details on behavioral traits such as rate limits, authentication needs, error handling, or whether it's a read-only operation (though implied by 'Get'). This leaves gaps in transparency for an agent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections: purpose, returns, use cases, parameters, and an example. It is appropriately sized and front-loaded with the core purpose. However, the 'Returns' section is somewhat verbose and its bullet points could be condensed, and the 'Use cases' section is partly redundant with the purpose statement, slightly reducing efficiency.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description provides good context on what the tool does and its parameters. However, it lacks details on output format (described only in bullet points, without a schema), error conditions, and performance implications (e.g., pagination limits). For a tool with this many parameters and complex filtering, these gaps may prevent an agent from invoking it correctly on the first attempt.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents parameters well. The description adds value by listing all parameters with brief semantics (e.g., 'Filter by type (review, discussion, all)') and provides an example that clarifies usage. It compensates for the schema's lack of enums by explaining options like 'comment_type' and 'mr_state,' enhancing parameter understanding beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get all comments authored by a user on merge requests' with additional context about including 'code review discussions.' It specifies the resource (comments on merge requests) and action (get/find). However, it does not explicitly differentiate from sibling tools like 'gitlab_get_user_issue_comments' or 'gitlab_get_merge_request_notes,' which reduces clarity in distinguishing usage scenarios.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides 'Use cases' (e.g., 'Code review participation tracking') which imply when to use this tool, but it does not explicitly state when not to use it or name alternatives among sibling tools. For example, it doesn't clarify if this should be used instead of 'gitlab_get_merge_request_notes' for user-specific comments. This leaves usage context somewhat implied rather than explicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Vijay-Duke/mcp-gitlab'
