
USCardForum MCP Server

by hmumixaM

get_user_replies

Fetch user replies across topics to analyze contributions, find data points, and evaluate participation quality in the USCardForum community.

Instructions

Fetch replies/posts made by a user in other topics.

Args:
    username: The user's handle
    offset: Pagination offset (0, 30, 60, ...)

Returns a list of UserAction objects with:
- topic_id: Which topic they replied to
- post_number: Their post number in that topic
- title: Topic title
- excerpt: Preview of their reply
- created_at: When they replied

Use this to:
- See a user's contributions across topics
- Find their data points and experiences
- Evaluate the quality of their participation

Paginate with offset in increments of 30.
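The pagination described above can be sketched as a loop that advances `offset` by the page size until a short or empty page comes back. This is an illustrative helper, not part of the server; the `get_user_replies` callable stands in for the tool or client method:

```python
# Sketch: collect all of a user's replies by paging in increments of 30,
# assuming get_user_replies(username, offset=...) returns a list that is
# empty (or shorter than a full page) once the results are exhausted.
def fetch_all_replies(get_user_replies, username, page_size=30):
    replies = []
    offset = 0
    while True:
        batch = get_user_replies(username, offset=offset)
        if not batch:
            break
        replies.extend(batch)
        if len(batch) < page_size:
            # A short page means there is nothing past this offset.
            break
        offset += page_size
    return replies
```

Stopping on a short page saves one round trip versus always fetching until an empty page, but both conditions are kept in case the backend pads pages.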

Input Schema

Name       Required   Description                          Default
username   Yes        The user's handle
offset     No         Pagination offset (0, 30, 60, ...)

Output Schema

Name     Required   Description   Default
result   Yes
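The `result` field holds a list of UserAction objects. A stdlib sketch of the shape implied by the field list earlier on this page (the field types are assumptions; the server's actual model may use Pydantic or different types):

```python
from dataclasses import dataclass

@dataclass
class UserAction:
    """One reply a user made in some topic (shape assumed from the docs)."""
    topic_id: int      # which topic they replied to
    post_number: int   # their post number within that topic
    title: str         # topic title
    excerpt: str       # preview of their reply
    created_at: str    # timestamp of the reply, e.g. an ISO-8601 string
```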

Implementation Reference

  • MCP tool handler for get_user_replies, decorated with @mcp.tool(). Defines input schema via Annotated[Field] and output as list[UserAction]. Delegates to shared client instance.
    @mcp.tool()
    def get_user_replies(
        username: Annotated[
            str,
            Field(description="The user's handle"),
        ],
        offset: Annotated[
            int | None,
            Field(default=None, description="Pagination offset (0, 30, 60, ...)"),
        ] = None,
    ) -> list[UserAction]:
        """
        Fetch replies/posts made by a user in other topics.
    
        Args:
            username: The user's handle
            offset: Pagination offset (0, 30, 60, ...)
    
        Returns a list of UserAction objects with:
        - topic_id: Which topic they replied to
        - post_number: Their post number in that topic
        - title: Topic title
        - excerpt: Preview of their reply
        - created_at: When they replied
    
        Use this to:
        - See a user's contributions across topics
        - Find their data points and experiences
        - Evaluate the quality of their participation
    
        Paginate with offset in increments of 30.
        """
        return get_client().get_user_replies(username, offset=offset)
  • DiscourseClient wrapper method that delegates to UsersAPI.get_user_replies.
    def get_user_replies(
        self,
        username: str,
        offset: int | None = None,
    ) -> list[UserAction]:
        """Fetch user's replies.
    
        Args:
            username: User handle
            offset: Optional pagination offset
    
        Returns:
            List of reply action objects
        """
        return self._users.get_user_replies(username, offset=offset)
  • UsersAPI implementation that fetches user actions with filter=5 (replies) from /user_actions.json endpoint.
    def get_user_replies(
        self,
        username: str,
        offset: int | None = None,
    ) -> list[UserAction]:
        """Fetch user's replies.
    
        Args:
            username: User handle
            offset: Optional pagination offset
    
        Returns:
            List of reply action objects
        """
        return self.get_user_actions(username, filter=5, offset=offset)
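Per the implementation note above, this bottoms out in a request to Discourse's /user_actions.json endpoint with filter=5 selecting reply actions. A minimal sketch of the URL construction (the helper name and base-URL handling are illustrative, not taken from the source):

```python
import urllib.parse

def build_user_actions_url(base_url, username, filter_id=5, offset=None):
    """Build the /user_actions.json query URL; filter_id=5 means replies."""
    params = {"username": username, "filter": filter_id}
    if offset is not None:
        params["offset"] = offset
    return f"{base_url}/user_actions.json?" + urllib.parse.urlencode(params)
```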
  • Imports the get_user_replies tool from users.py into the server_tools package, making it available for higher-level imports.
    from .users import (
        get_user_summary,
        get_user_topics,
        get_user_replies,
        get_user_actions,
        get_user_badges,
        get_user_following,
        get_user_followers,
        get_user_reactions,
        list_users_with_badge,
    )
    
    # =============================================================================
    # 🔐 Auth — Authenticated actions (requires login)
    # =============================================================================
    from .auth import (
        login,
        get_current_session,
        get_notifications,
        bookmark_post,
        subscribe_topic,
    )
    
    # =============================================================================
    # Prompts & Resources
    # =============================================================================
    from .prompts import analyze_user, compare_cards, find_data_points, research_topic
    from .resources import resource_categories, resource_hot_topics, resource_new_topics
    
    
    __all__ = [
        # 📰 Discovery
        "get_hot_topics",
        "get_new_topics",
        "get_top_topics",
        "search_forum",
        "get_categories",
        # 📖 Reading
        "get_topic_info",
        "get_topic_posts",
        "get_all_topic_posts",
        # 👤 Users
        "get_user_summary",
        "get_user_topics",
        "get_user_replies",
        "get_user_actions",
        "get_user_badges",
        "get_user_following",
        "get_user_followers",
  • Imports and re-exports get_user_replies from server_tools in the main server entrypoint.
        get_user_replies,
        get_user_summary,
        get_user_topics,
        list_users_with_badge,
        login,
        research_topic,
        resource_categories,
        resource_hot_topics,
        resource_new_topics,
        search_forum,
        subscribe_topic,
    )
    
    __all__ = [
        "MCP_HOST",
        "MCP_PORT",
        "MCP_TRANSPORT",
        "NITAN_TOKEN",
        "SERVER_INSTRUCTIONS",
        "get_client",
        "main",
        "mcp",
        "analyze_user",
        "bookmark_post",
        "compare_cards",
        "find_data_points",
        "get_all_topic_posts",
        "get_categories",
        "get_current_session",
        "get_hot_topics",
        "get_new_topics",
        "get_notifications",
        "get_top_topics",
        "get_topic_info",
        "get_topic_posts",
        "get_user_actions",
        "get_user_badges",
        "get_user_followers",
        "get_user_following",
        "get_user_reactions",
        "get_user_replies",
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does well by disclosing key behaviors: it describes pagination mechanics ('Paginate with offset in increments of 30'), specifies the return format (list of UserAction objects with fields), and implies read-only behavior through 'Fetch'. It doesn't mention rate limits or auth needs, but covers the essential operational details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized, with clear sections (purpose, args, returns, usage, pagination) and no wasted sentences. Each part adds value, such as explaining the return structure and providing usage examples, making it easy to scan and understand.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, 100% schema coverage, and presence of an output schema (implied by 'Returns a list of UserAction objects'), the description is complete enough. It covers purpose, parameters, return values, usage scenarios, and pagination behavior, leaving no critical gaps for an AI agent to invoke it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters fully. The description repeats parameter info (e.g., 'offset: Pagination offset (0, 30, 60, ...)') without adding significant meaning beyond the schema, such as explaining why increments of 30 are used or constraints on username format. Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Fetch') and resource ('replies/posts made by a user in other topics'), distinguishing it from siblings like get_user_topics (the user's own topics) and get_user_actions (broader actions). It precisely identifies what is being retrieved and from where.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear usage context with 'Use this to:' examples (e.g., 'See a user's contributions across topics'), which helps understand when to apply this tool. However, it doesn't explicitly state when not to use it or name specific alternatives among siblings, such as get_user_actions, which might overlap in functionality.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/hmumixaM/uscardforum-mcp4'
