USCardForum MCP Server

by raidenrock

get_topic_posts

Fetch posts from a USCardForum topic in batches for paginated reading, starting at a specified position with optional markdown source.

Instructions

Fetch a batch of posts from a topic starting at a specific position.

Args:
    topic_id: The numeric topic ID
    post_number: Which post number to start from (default: 1 = first post)
    include_raw: Include raw markdown source (default: False, returns HTML)

This fetches ~20 posts per call starting from post_number.
Use for paginated reading of topics.

Returns a list of Post objects with:
- post_number: Position in topic (1, 2, 3...)
- username: Author's username
- cooked: HTML content of the post
- raw: Markdown source (if include_raw=True)
- created_at: When posted
- updated_at: Last edit time
- like_count: Number of likes
- reply_count: Number of direct replies
- reply_to_post_number: Which post this replies to (if any)

Pagination example:
1. Call with post_number=1, get posts 1-20
2. Call with post_number=21, get posts 21-40
3. Continue until no posts returned
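The pagination loop above can be sketched in Python. `fetch_batch` is a hypothetical stand-in for calling the `get_topic_posts` tool; here it is stubbed with fake data (a 47-post topic) so the loop logic is runnable on its own.

```python
BATCH_SIZE = 20
TOTAL_POSTS = 47  # pretend the topic has 47 posts


def fetch_batch(topic_id: int, post_number: int = 1) -> list[dict]:
    """Stub for get_topic_posts: returns up to ~20 posts starting at post_number."""
    start = post_number
    end = min(post_number + BATCH_SIZE - 1, TOTAL_POSTS)
    if start > TOTAL_POSTS:
        return []  # past the end of the topic: empty batch signals completion
    return [{"post_number": n, "username": f"user{n}"} for n in range(start, end + 1)]


def read_all_posts(topic_id: int) -> list[dict]:
    """Fetch batches until an empty batch is returned."""
    posts: list[dict] = []
    next_post = 1
    while True:
        batch = fetch_batch(topic_id, post_number=next_post)
        if not batch:
            break
        posts.extend(batch)
        # resume one past the last post_number seen, per the example above
        next_post = batch[-1]["post_number"] + 1
    return posts


all_posts = read_all_posts(12345)
print(len(all_posts))  # 47
```

Advancing the cursor from the last returned `post_number` (rather than assuming exactly 20 posts per batch) is the safer pattern, since the tool only guarantees ~20 posts per call.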

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| topic_id | Yes | The numeric topic ID | |
| post_number | No | Which post number to start from (1 = first post) | 1 |
| include_raw | No | Include raw markdown source (when false, returns HTML) | False |

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| result | Yes | | |
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the key traits: it is a read-only operation (implied by 'Fetch'), specifies the batch size ('~20 posts per call'), explains the pagination behavior, and details the return format with Post object fields. This covers the essential behavioral aspects without contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured, with clear sections (purpose, args, returns, example) and front-loaded key information. It is appropriately sized for the tool's complexity; the detailed return-field list and pagination example are slightly verbose, though justified for clarity. A minor deduction for length keeps it at 4.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, no annotations, and an output schema (implied by the detailed return description), the description is complete. It covers purpose, usage, parameters, behavior, and output format, providing all necessary context for an agent to use the tool effectively without gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, meaning the input schema already documents all parameters thoroughly. The description repeats parameter information in the 'Args' section without adding anything beyond the schema, such as edge cases or constraints. This meets the baseline score of 3 for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Fetch') and resource ('posts from a topic'), and distinguishes it from siblings like 'get_all_topic_posts' by specifying it fetches a batch starting at a position, not all posts. This explicit differentiation earns the highest score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines: it states 'Use for paginated reading of topics' and includes a step-by-step pagination example, clearly indicating when to use this tool versus alternatives like 'get_all_topic_posts' for non-paginated access. This comprehensive guidance merits a 5.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
