
list_questions

Retrieve and filter prediction questions from Fatebook to track forecasts, monitor unresolved items, or review historical predictions with optional detailed views.

Instructions

List Fatebook questions with optional filtering

Returns a list of Question objects. By default returns core fields only. Set detailed=True to include all available fields (forecasts, comments, etc.).

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| apiKey | No | Fatebook API key; falls back to the `FATEBOOK_API_KEY` environment variable | `""` |
| resolved | No | If true, filter to resolved questions | `false` |
| unresolved | No | If true, filter to unresolved questions | `false` |
| searchString | No | Search term passed through to the API | `""` |
| limit | No | Maximum number of questions to return | `100` |
| cursor | No | Pagination cursor from a previous response | `""` |
| detailed | No | If true, include all available fields (forecasts, comments, etc.) | `false` |

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| result | Yes | List of `Question` objects | n/a |
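As an illustrative (not authoritative) example, an agent might invoke the tool with arguments like:

```json
{
  "searchString": "rain",
  "unresolved": true,
  "limit": 20,
  "detailed": false
}
```

All fields are optional; omitted parameters take the defaults shown above.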

Implementation Reference

  • The primary handler function for the 'list_questions' tool. Decorated with @mcp.tool() for registration and execution. Fetches questions from Fatebook API, parses with Pydantic, and returns structured list.
    @mcp.tool()
    async def list_questions(
        ctx: Context,
        apiKey: str = "",
        resolved: bool = False,
        unresolved: bool = False,
        searchString: str = "",
        limit: int = 100,
        cursor: str = "",
        detailed: bool = False,
    ) -> QuestionsList:
        """List Fatebook questions with optional filtering
    
        Returns a list of Question objects. By default returns core fields only.
        Set detailed=True to include all available fields (forecasts, comments, etc.).
        """
    
        await ctx.info(
            f"list_questions called with resolved={resolved}, unresolved={unresolved}, searchString='{searchString}', limit={limit}, detailed={detailed}"
        )
    
        api_key = apiKey or os.getenv("FATEBOOK_API_KEY")
        if not api_key:
            await ctx.error("API key is required but not provided")
            raise ValueError(
                "API key is required (provide as parameter or set FATEBOOK_API_KEY environment variable)"
            )
    
        params: dict[str, Any] = {"apiKey": api_key}
    
        # Add optional parameters
        if resolved:
            params["resolved"] = resolved
        if unresolved:
            params["unresolved"] = unresolved
        if searchString:
            params["searchString"] = searchString
        params["limit"] = limit
        if cursor:
            params["cursor"] = cursor
    
        await ctx.debug(f"Making API request with params: {params}")
    
        try:
            async with httpx.AsyncClient() as client:
                response = await client.get("https://fatebook.io/api/v0/getQuestions", params=params)
                response.raise_for_status()
    
                data = response.json()
    
                # Parse response using Pydantic model
                questions_response = QuestionsResponse(**data)
                questions = questions_response.items
    
                await ctx.info(f"Successfully retrieved {len(questions)} questions")
    
                # Return as QuestionsList with 'result' field to match MCP schema expectations
                return QuestionsList(result=questions)
    
        except httpx.HTTPError as e:
            await ctx.error(f"HTTP error occurred: {e}")
            raise
        except Exception as e:
            await ctx.error(f"Unexpected error occurred: {e}")
            raise
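The query construction in the handler above has one subtlety worth isolating: falsy filters (`resolved=False`, empty `searchString`, empty `cursor`) are omitted from the request entirely, while `limit` is always sent. A standalone sketch of that logic, extracted for clarity (the function name `build_params` is ours, not the project's):

```python
import os
from typing import Any


def build_params(
    api_key: str,
    resolved: bool = False,
    unresolved: bool = False,
    search_string: str = "",
    limit: int = 100,
    cursor: str = "",
) -> dict[str, Any]:
    """Mirror the handler's query construction: falsy filters are
    dropped from the request, while limit is always included."""
    params: dict[str, Any] = {"apiKey": api_key}
    if resolved:
        params["resolved"] = resolved
    if unresolved:
        params["unresolved"] = unresolved
    if search_string:
        params["searchString"] = search_string
    params["limit"] = limit
    if cursor:
        params["cursor"] = cursor
    return params
```

One consequence of this pattern: there is no way to explicitly request `resolved=false` from the API, since `False` and "not provided" produce the same request.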
  • Pydantic schemas used by list_questions: QuestionsResponse parses API data, QuestionsList wraps result for MCP tool response.
    class QuestionsResponse(BaseModel):
        """Response from getQuestions endpoint"""
    
        items: List[Question]
        cursor: Optional[str] = None
    
    
    class QuestionsList(BaseModel):
        """List of questions for MCP responses - matches expected MCP schema"""
    
        result: List[Question]
    
        class Config:
            populate_by_name = True
            by_alias = True
  • Core Question Pydantic model used in list_questions responses, defining structure for individual questions with fields, validators, and formatting methods.
    class Question(BaseModel):
        """Fatebook question model with optional fields for detailed responses"""
    
        # Core fields (id is optional since getQuestion doesn't return it)
        id: Optional[str] = None
        title: str
        type: Literal["BINARY", "NUMERIC", "MULTIPLE_CHOICE"] = "BINARY"
        resolved: bool = False
    
        # Timestamps
        created_at: datetime = Field(alias="createdAt")
        resolve_by: datetime = Field(alias="resolveBy")
        resolved_at: Optional[datetime] = Field(None, alias="resolvedAt")
    
        # Resolution information
        resolution: Optional[Literal["YES", "NO", "AMBIGUOUS"]] = None
    
        # Additional content (typically in detailed view)
        notes: Optional[str] = None
    
        # Related data (typically in detailed view)
        forecasts: Optional[List[Forecast]] = Field(
            default=None, description="List of forecasts on this question"
        )
        tags: Optional[List[Tag]] = Field(default=None, description="Tags associated with the question")
        comments: Optional[List[Comment]] = Field(default=None, description="Comments on the question")
    
        # Visibility settings (typically in detailed view)
        shared_publicly: Optional[bool] = Field(None, alias="sharedPublicly")
        unlisted: Optional[bool] = None
        hide_forecasts_until: Optional[datetime] = Field(None, alias="hideForecastsUntil")
        share_with_lists: Optional[List[str]] = Field(None, alias="shareWithLists")
        share_with_email: Optional[List[str]] = Field(None, alias="shareWithEmail")
    
        # Additional fields from getQuestion endpoint
        your_latest_prediction: Optional[str] = Field(None, alias="yourLatestPrediction")
        question_scores: Optional[List] = Field(None, alias="questionScores")
    
        class Config:
            populate_by_name = True
            by_alias = True  # Use aliases when serializing
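As a sketch of the alias behavior, here is a trimmed-down `Question` in pydantic v2 idiom (using `model_config = ConfigDict(...)` rather than the legacy `class Config`): raw API payloads parse via the camelCase aliases, and `model_dump(by_alias=True)` serializes them back out the same way.

```python
from datetime import datetime
from typing import Optional

from pydantic import BaseModel, ConfigDict, Field


class Question(BaseModel):
    """Trimmed-down sketch of the Question model above."""

    model_config = ConfigDict(populate_by_name=True)

    id: Optional[str] = None
    title: str
    resolved: bool = False
    created_at: datetime = Field(alias="createdAt")
    resolve_by: datetime = Field(alias="resolveBy")


# Parse a raw API payload using the camelCase aliases...
q = Question(
    title="Will it rain tomorrow?",
    createdAt="2024-06-01T00:00:00Z",
    resolveBy="2024-06-02T00:00:00Z",
)

# ...and serialize back out with the same aliases.
dumped = q.model_dump(by_alias=True)
```

With `populate_by_name=True`, the snake_case field names (`created_at=...`) would also be accepted at construction time.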
  • Import of schema models required for list_questions tool.
    from .models import Question, QuestionReference, QuestionsList, QuestionsResponse
    
    load_dotenv()
    
    mcp = FastMCP("Fatebook MCP Server")
    
    
    # Type alias for httpx params to handle mypy type checking
    ParamsType = dict[str, str | int | float | bool | None]
    
    
    @mcp.tool()
  • Test file referencing list_questions as an expected registered tool.
    expected_tools = {"list_questions", "create_question", "count_forecasts"}
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It mentions the return type ('Question objects') and the effect of 'detailed=True', but lacks critical behavioral details such as pagination behavior (implied by 'cursor' but not explained), authentication needs (implied by 'apiKey' but not stated), rate limits, or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
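The pagination that the review flags as implicit follows the usual cursor pattern: feed the `cursor` from one response into the next request until none comes back. A minimal sketch, assuming the response shape shown in `QuestionsResponse` above (`items` plus an optional `cursor`); the fetch callable is injected so the loop itself stays network-free:

```python
from typing import Any, Callable, Dict, Iterator, Optional


def paginate(
    fetch_page: Callable[[Optional[str]], Dict[str, Any]],
) -> Iterator[Any]:
    """Drain a cursor-paginated endpoint.

    fetch_page(cursor) is any callable returning a mapping shaped like
    the getQuestions response: {"items": [...], "cursor": <next-or-None>}.
    """
    cursor: Optional[str] = None
    while True:
        page = fetch_page(cursor)
        yield from page.get("items", [])
        cursor = page.get("cursor")
        if not cursor:
            break
```

In practice, `fetch_page` would wrap the `getQuestions` call with the same `params` dict the handler builds, adding `cursor` on each iteration.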

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose and structured efficiently in three sentences. Each sentence adds value: listing with filtering, return type, and detailed mode. No wasted words or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With seven parameters, 0% schema description coverage, and no annotations, the description must carry most of the documentation load; the output schema at least covers return values. It explains the basic operation and detailed mode, but misses key parameter semantics and behavioral context, making it adequate with clear gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It explains 'detailed=True' adds fields, but doesn't clarify other parameters like 'resolved/unresolved' (mutually exclusive?), 'searchString' (what fields?), 'limit' (max?), or 'cursor' (pagination token). The description adds some value but leaves many parameters underspecified.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('Fatebook questions') with optional filtering, making the purpose specific. However, it doesn't explicitly differentiate from sibling tools like 'get_question' (singular) or 'count_forecasts', leaving room for minor ambiguity in sibling comparison.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for listing questions with filtering, but doesn't provide explicit guidance on when to use this tool versus alternatives like 'get_question' for a single question or 'count_forecasts' for counts. No exclusions or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
