
USCardForum MCP Server

by raidenrock

get_user_reactions

Retrieve a user's post reactions to identify their interests and values within the USCardForum community.

Instructions

Fetch a user's post reactions (likes, etc.).

Args:
    username: The user's handle
    offset: Pagination offset (optional)

Returns a UserReactions object with reaction data.

Use to see what content a user has reacted to,
which can indicate their interests and values.

Input Schema

Name      Required  Description        Default
username  Yes       The user's handle  —
offset    No        Pagination offset  None
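As a concrete sketch of the input schema above (the handle and offset values are illustrative, not from the source), a call supplying both fields might pass arguments like:

```python
# Illustrative arguments payload for the get_user_reactions tool.
# "example_user" is a hypothetical handle; offset=20 requests a later page.
example_arguments = {
    "username": "example_user",  # required: the user's handle
    "offset": 20,                # optional: pagination offset
}

print(example_arguments)
```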

Output Schema

Name       Required  Description    Default
reactions  No        Reaction data  []

Implementation Reference

  • MCP tool handler for 'get_user_reactions' decorated with @mcp.tool(). Defines input parameters with descriptions and output type UserReactions. Delegates execution to the DiscourseClient.
    @mcp.tool()
    def get_user_reactions(
        username: Annotated[
            str,
            Field(description="The user's handle"),
        ],
        offset: Annotated[
            int | None,
            Field(default=None, description="Pagination offset"),
        ] = None,
    ) -> UserReactions:
        """
        Fetch a user's post reactions (likes, etc.).
    
        Args:
            username: The user's handle
            offset: Pagination offset (optional)
    
        Returns a UserReactions object with reaction data.
    
        Use to see what content a user has reacted to,
        which can indicate their interests and values.
        """
        return get_client().get_user_reactions(username, offset=offset)
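Since `offset` is the tool's only paging control, a caller can walk a user's full reaction history by advancing it until a page comes back empty. A minimal sketch of that loop — the `fetch` callable and the page-size/offset semantics are assumptions for illustration, not confirmed by the source:

```python
from typing import Any, Callable

def iter_all_reactions(
    fetch: Callable[..., list[Any]],
    username: str,
) -> list[Any]:
    """Collect every reaction by advancing the offset until a page is empty.

    `fetch` stands in for a call like get_user_reactions; the exact page
    size and offset semantics are assumed for this sketch.
    """
    collected: list[Any] = []
    offset = 0
    while True:
        page = fetch(username, offset=offset)
        if not page:
            return collected
        collected.extend(page)
        offset += len(page)

# Usage with a fake two-page client (hypothetical data):
pages = {0: ["like-1", "like-2"], 2: ["heart-3"], 3: []}
fake_fetch = lambda username, offset=0: pages.get(offset, [])
print(iter_all_reactions(fake_fetch, "example_user"))
```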
  • Pydantic BaseModel defining the output schema for get_user_reactions tool, containing a list of reactions.
    class UserReactions(BaseModel):
        """User's post reactions."""
    
        reactions: list[Any] = Field(default_factory=list, description="Reaction data")
    
        class Config:
            extra = "ignore"
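The `extra = "ignore"` config means unexpected payload keys are silently dropped, and the `default_factory` makes a missing `reactions` key fall back to an empty list. A dependency-free sketch of that tolerant parsing behavior (this mimics the effect, not the actual pydantic machinery):

```python
from typing import Any

def parse_user_reactions(payload: dict[str, Any]) -> dict[str, Any]:
    """Mimic UserReactions' tolerance: keep only 'reactions', default to []."""
    return {"reactions": payload.get("reactions", [])}

# Extra keys are ignored; a missing key yields the default empty list.
print(parse_user_reactions({"reactions": [1], "unexpected": True}))
print(parse_user_reactions({}))
```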
  • Client wrapper method that forwards get_user_reactions call to the UsersAPI instance.
    def get_user_reactions(
        self,
        username: str,
        offset: int | None = None,
    ) -> UserReactions:
        """Fetch user's post reactions.
    
        Args:
            username: User handle
            offset: Optional pagination offset
    
        Returns:
            User reactions data
        """
        return self._users.get_user_reactions(username, offset=offset)
  • Low-level API implementation that performs HTTP GET to fetch user reactions from the Discourse plugin endpoint and parses into UserReactions model.
    def get_user_reactions(
        self,
        username: str,
        offset: int | None = None,
    ) -> UserReactions:
        """Fetch user's post reactions.
    
        Args:
            username: User handle
            offset: Optional pagination offset
    
        Returns:
            User reactions data
        """
        params_list: list[tuple[str, Any]] = [("username", username)]
        if offset is not None:
            params_list.append(("offset", int(offset)))
    
        payload = self._get(
            "/discourse-reactions/posts/reactions.json",
            params=params_list,
        )
        return UserReactions(reactions=payload.get("reactions", []))
  • Tool function re-exported in server_tools package __all__ for import in server.py and MCP registration.
    "get_user_reactions",
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool fetches data (implied read-only) and mentions pagination via the offset parameter, but lacks details on rate limits, authentication needs, error handling, or what specific data is included in the UserReactions object. The description adds some behavioral context but is incomplete for a tool with no annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized, with a clear purpose statement, parameter details, return value, and usage context in separate sentences. It is front-loaded with the main action. Minor redundancy in parameter descriptions slightly reduces efficiency, but overall it is concise and effective.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that there is an output schema (implied by 'Returns a UserReactions object'), the description does not need to explain return values in detail. It covers the tool's purpose, parameters, and usage context adequately. However, for a tool with no annotations, it could benefit from more behavioral details like authentication or rate limits to be fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters (username and offset) fully. The description repeats the parameter information in the 'Args' section but does not add meaningful semantics beyond what the schema provides, such as format examples or constraints. Baseline 3 is appropriate when schema coverage is high.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Fetch') and resource ('a user's post reactions'), distinguishing it from sibling tools like get_user_actions or get_user_summary by focusing specifically on reactions (likes, etc.). The purpose is precise and not a tautology of the tool name.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage context ('Use to see what content a user has reacted to, which can indicate their interests and values'), suggesting when this tool might be helpful. However, it does not explicitly state when to use this tool versus alternatives like get_user_actions or get_user_replies, nor does it provide exclusions or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
