GodisinHisHeaven

USCardForum MCP Server

get_all_topic_posts

Fetch all posts from a USCardForum topic with automatic pagination, allowing control over post ranges and limits for efficient data retrieval.

Instructions

Fetch all posts from a topic with automatic pagination.

Args:
    topic_id: The numeric topic ID
    include_raw: Include markdown source (default: False)
    start_post_number: First post to fetch (default: 1)
    end_post_number: Last post to fetch (optional, fetches to end if not set)
    max_posts: Maximum number of posts to return (optional safety limit)

This automatically handles pagination to fetch multiple batches.

IMPORTANT: For topics with many posts (>100), use max_posts to limit
the response size. You can always fetch more with start_post_number.

Use cases:
- Fetch entire small topic: get_all_topic_posts(topic_id=123)
- Fetch first 50 posts: get_all_topic_posts(topic_id=123, max_posts=50)
- Fetch posts 51-100: get_all_topic_posts(topic_id=123, start_post_number=51, max_posts=50)
- Fetch specific range: get_all_topic_posts(topic_id=123, start_post_number=10, end_post_number=30)

Returns the same Post structure as get_topic_posts but for all matching posts.

Pro tip: Use get_topic_info first to check post_count before deciding
whether to fetch all or paginate manually.
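
The chunked-fetch pattern described above (fetch a batch with max_posts, then resume with start_post_number) can be sketched as follows. Here fetch_posts is a stub standing in for the real get_all_topic_posts tool call, pretending the topic has 120 posts; only the looping pattern is the point:

```python
# Sketch of chunked retrieval using start_post_number / max_posts.
# fetch_posts is a stub for the real get_all_topic_posts tool call;
# it pretends the topic has _total posts, numbered 1.._total.
def fetch_posts(topic_id, start_post_number=1, max_posts=None, _total=120):
    end = _total if max_posts is None else min(_total, start_post_number + max_posts - 1)
    return list(range(start_post_number, end + 1))

def fetch_in_chunks(topic_id, chunk_size=50):
    posts, start = [], 1
    while True:
        batch = fetch_posts(topic_id, start_post_number=start, max_posts=chunk_size)
        posts.extend(batch)
        if len(batch) < chunk_size:  # short batch means we reached the end
            return posts
        start += chunk_size

print(len(fetch_in_chunks(123)))  # 120
```

A short (or empty) batch signals the end of the topic, so the loop needs no prior knowledge of post_count, though checking it first via get_topic_info remains cheaper.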

Input Schema

Name               Required  Description                                                 Default
topic_id           Yes       The numeric topic ID                                        —
include_raw        No        Include markdown source                                     False
start_post_number  No        First post to fetch                                         1
end_post_number    No        Last post to fetch (fetches to end if not set)              —
max_posts          No        Maximum number of posts to return (optional safety limit)   —

Output Schema

Name    Required  Description              Default
result  Yes       List of Post objects     —

Implementation Reference

  • The primary MCP tool handler for 'get_all_topic_posts'. Decorated with @mcp.tool(), defines input schema via Annotated[Field], comprehensive docstring, and delegates execution to the DiscourseClient.get_all_topic_posts method.
    @mcp.tool()
    def get_all_topic_posts(
        topic_id: Annotated[
            int,
            Field(description="The numeric topic ID"),
        ],
        include_raw: Annotated[
            bool,
            Field(default=False, description="Include markdown source (default: False)"),
        ] = False,
        start_post_number: Annotated[
            int,
            Field(default=1, description="First post to fetch (default: 1)"),
        ] = 1,
        end_post_number: Annotated[
            int | None,
            Field(default=None, description="Last post to fetch (optional, fetches to end if not set)"),
        ] = None,
        max_posts: Annotated[
            int | None,
            Field(default=None, description="Maximum number of posts to return (optional safety limit)"),
        ] = None,
    ) -> list[Post]:
        """
        Fetch all posts from a topic with automatic pagination.
    
        Args:
            topic_id: The numeric topic ID
            include_raw: Include markdown source (default: False)
            start_post_number: First post to fetch (default: 1)
            end_post_number: Last post to fetch (optional, fetches to end if not set)
            max_posts: Maximum number of posts to return (optional safety limit)
    
        This automatically handles pagination to fetch multiple batches.
    
        IMPORTANT: For topics with many posts (>100), use max_posts to limit
        the response size. You can always fetch more with start_post_number.
    
        Use cases:
        - Fetch entire small topic: get_all_topic_posts(topic_id=123)
        - Fetch first 50 posts: get_all_topic_posts(topic_id=123, max_posts=50)
        - Fetch posts 51-100: get_all_topic_posts(topic_id=123, start_post_number=51, max_posts=50)
    - Fetch specific range: get_all_topic_posts(topic_id=123, start_post_number=10, end_post_number=30)
    
        Returns the same Post structure as get_topic_posts but for all matching posts.
    
        Pro tip: Use get_topic_info first to check post_count before deciding
        whether to fetch all or paginate manually.
        """
        return get_client().get_all_topic_posts(
            topic_id,
            include_raw=include_raw,
            start_post_number=start_post_number,
            end_post_number=end_post_number,
            max_posts=max_posts,
        )
  • Pydantic model defining the structure of each Post object returned by the tool (output schema: list[Post]). Includes fields like post_number, username, cooked/raw content, timestamps, and engagement metrics.
    class Post(BaseModel):
        """A single post within a topic."""
    
        id: int = Field(..., description="Unique post identifier")
        post_number: int = Field(..., description="Position in topic (1-indexed)")
        username: str = Field(..., description="Author's username")
        cooked: str | None = Field(None, description="HTML-rendered content")
        raw: str | None = Field(None, description="Raw markdown source")
        created_at: datetime | None = Field(None, description="When posted")
        updated_at: datetime | None = Field(None, description="Last edit time")
        like_count: int = Field(0, description="Number of likes")
        reply_count: int = Field(0, description="Number of direct replies")
        reply_to_post_number: int | None = Field(
            None, description="Post number this replies to"
        )
    
        class Config:
            extra = "ignore"
  • Imports the get_all_topic_posts tool function (line 20) along with all other MCP tools into the main server entrypoint module. This ensures the tool is loaded and registered automatically via its @mcp.tool() decorator when the MCP server starts.
    from uscardforum.server_tools import (
        analyze_user,
        bookmark_post,
        compare_cards,
        find_data_points,
        get_all_topic_posts,
        get_categories,
        get_current_session,
        get_hot_topics,
        get_new_topics,
        get_notifications,
        get_top_topics,
        get_topic_info,
        get_topic_posts,
        get_user_actions,
        get_user_badges,
        get_user_followers,
        get_user_following,
        get_user_reactions,
        get_user_replies,
        get_user_summary,
        get_user_topics,
        list_users_with_badge,
        login,
        research_topic,
        resource_categories,
        resource_hot_topics,
        resource_new_topics,
        search_forum,
        subscribe_topic,
    )
  • Re-exports the get_all_topic_posts tool from the topics submodule, facilitating its import in the server.py entrypoint.
    from .topics import get_topic_info, get_topic_posts, get_all_topic_posts
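
The handler above delegates to DiscourseClient.get_all_topic_posts, whose source is not shown on this page. A hypothetical sketch of the pagination loop such a method might implement, with the network call replaced by a stub — BATCH_SIZE, fetch_batch, and the 55-post topic are all invented for illustration, not the actual client code:

```python
# Hypothetical pagination loop; fetch_batch stubs the network call
# and pretends the topic has 55 posts, fetched 20 at a time.
BATCH_SIZE = 20

def fetch_batch(topic_id, start, _total=55):
    return list(range(start, min(start + BATCH_SIZE, _total + 1)))

def get_all_topic_posts(topic_id, start_post_number=1,
                        end_post_number=None, max_posts=None):
    posts, cursor = [], start_post_number
    while True:
        batch = fetch_batch(topic_id, cursor)
        if not batch:
            break
        for n in batch:
            if end_post_number is not None and n > end_post_number:
                return posts  # past the requested range
            posts.append(n)
            if max_posts is not None and len(posts) >= max_posts:
                return posts  # safety limit reached
        cursor = batch[-1] + 1
        if len(batch) < BATCH_SIZE:  # short batch: end of topic
            break
    return posts

print(get_all_topic_posts(123, start_post_number=10, end_post_number=30))
```

The loop stops on whichever comes first: end_post_number, max_posts, or the end of the topic, matching the behavior described in the docstring.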
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Since no annotations are provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: automatic pagination, safety limits with max_posts, and the ability to fetch specific ranges. It mentions the return structure ('Returns the same Post structure as get_topic_posts') and provides practical tips. However, it doesn't cover potential errors, rate limits, or authentication needs, which keeps it from a perfect score.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well structured, with clear sections (purpose, args, important notes, use cases, returns, pro tip), and is front-loaded with the core purpose. It is slightly verbose, though, with examples that largely repeat the same call pattern, which keeps it from a perfect score.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (5 parameters, automatic pagination) and the presence of an output schema (which handles return values), the description is complete. It covers purpose, parameters, usage guidelines, behavioral traits, and integration with sibling tools. The output schema means the description doesn't need to explain return values in detail, and it effectively addresses all other aspects needed for correct tool selection and invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the baseline is 3. The description adds significant value beyond the schema by explaining parameter interactions and use cases in the 'Args' section and examples. It clarifies how parameters like start_post_number, end_post_number, and max_posts work together, and provides default behaviors. This enhances understanding beyond the schema's basic descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Fetch all posts from a topic with automatic pagination.' It specifies the verb ('fetch'), resource ('posts from a topic'), and key behavior ('automatic pagination'). It distinguishes from sibling 'get_topic_posts' by emphasizing the automatic pagination for fetching all posts rather than manual pagination.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives. It includes an 'IMPORTANT' note for topics with many posts (>100) to use max_posts, advises using 'get_topic_info first to check post_count before deciding whether to fetch all or paginate manually,' and gives specific use cases with examples. It clearly differentiates from 'get_topic_posts' by handling pagination automatically.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/GodisinHisHeaven/uscardforum-mcp'
