Glama
TheOneTrueNiz

Grokipedia MCP Server

get_page_sections

Read-only · Idempotent

Extract the table of contents from Grokipedia articles to understand article structure and identify available sections before retrieving specific content.

Instructions

Get the table of contents (all section headers) for a Grokipedia article.

Use for: understanding article structure, finding which sections exist. Returns: list of sections with level (1=H1, 2=H2, etc.) and header text. Tips: Call before get_page_section to find valid section headers.

Input Schema

Name: slug
Required: Yes
Description: Unique slug identifier of page to list sections for
Default: (none)

Implementation Reference

  • Handler function that implements get_page_sections tool logic.
    async def get_page_sections(
        slug: Annotated[str, Field(description="Unique slug identifier of page to list sections for")],
        ctx: Context[ServerSession, AppContext] | None = None,
    ) -> CallToolResult:
        """Get the table of contents (all section headers) for a Grokipedia article.
    
        Use for: understanding article structure, finding which sections exist.
        Returns: list of sections with level (1=H1, 2=H2, etc.) and header text.
        Tips: Call before get_page_section to find valid section headers.
        """
        if ctx is None:
            raise ValueError("Context is required")
    
        await ctx.debug(f"Fetching section headers for: '{slug}'")
    
        try:
            client = ctx.request_context.lifespan_context.client
            result = await client.get_page(slug=slug, include_content=True)
    
            if not result.found or result.page is None:
                await ctx.warning(f"Page not found: '{slug}', searching for alternatives")
                search_result = await client.search(query=slug, limit=5)
                if search_result.results:
                    suggestions = [f"{r.title} ({r.slug})" for r in search_result.results[:3]]
                    await ctx.info(f"Found {len(search_result.results)} similar pages")
                    raise ValueError(
                        f"Page not found: {slug}. Did you mean one of these? {', '.join(suggestions)}"
                    )
                raise ValueError(f"Page not found: {slug}")
    
            page = result.page
            content = page.content or ""
            
            # Extract all markdown headers
            lines = content.split('\n')
            sections = []
            
            for line in lines:
                stripped = line.strip()
                if stripped.startswith("#"):
                # Count the number of # symbols for header level
                # (count on the stripped line so indented headers are not misread)
                level = len(stripped) - len(stripped.lstrip("#"))
                    header_text = stripped.lstrip("#").strip()
                    if header_text:  # Only include non-empty headers
                        sections.append({"level": level, "header": header_text})
    
            await ctx.info(f"Found {len(sections)} section headers in '{page.title}'")
    
            if not sections:
                text_output = f"# {page.title}\n\nNo section headers found."
                structured = {
                    "slug": page.slug,
                    "title": page.title,
                    "sections": [],
                    "count": 0,
                }
            else:
                text_parts = [f"# {page.title}", "", f"Found {len(sections)} sections:", ""]
                for i, section in enumerate(sections, 1):
                    indent = "  " * (section["level"] - 1)
                    text_parts.append(
                        f"{i}. {indent}{section['header']} (Level {section['level']})"
                    )
    
                text_output = "\n".join(text_parts)
                structured = {
                    "slug": page.slug,
                    "title": page.title,
                    "sections": sections,
                    "count": len(sections),
                }
    
            return CallToolResult(
                content=[TextContent(type="text", text=text_output)],
                structuredContent=structured,
            )
    
        except GrokipediaNotFoundError as e:
            await ctx.error(f"Page not found: {e}")
            raise ValueError(f"Page not found: {slug}") from e
        except GrokipediaBadRequestError as e:
            await ctx.error(f"Bad request: {e}")
            raise ValueError(f"Invalid page slug: {e}") from e
        except GrokipediaNetworkError as e:
            await ctx.error(f"Network error: {e}")
            raise RuntimeError(f"Failed to connect to Grokipedia API: {e}") from e
        except GrokipediaAPIError as e:
            await ctx.error(f"API error: {e}")
            raise RuntimeError(f"Grokipedia API error: {e}") from e
  • Registration of the get_page_sections tool using the @mcp.tool decorator.
    @mcp.tool(
        annotations=ToolAnnotations(
            readOnlyHint=True,
            destructiveHint=False,
            idempotentHint=True
        )
    )
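The header-extraction loop in the handler above can be exercised in isolation. This minimal standalone sketch mirrors its logic (counting '#' on the stripped line, skipping empty headers) against sample markdown; extract_sections is a name chosen for this sketch, not part of the server:

```python
def extract_sections(content: str) -> list[dict]:
    """Parse markdown headers into {level, header} dicts, mirroring the handler."""
    sections = []
    for line in content.split("\n"):
        stripped = line.strip()
        if stripped.startswith("#"):
            # Count leading '#' on the stripped line so indented
            # headers still get the correct level.
            level = len(stripped) - len(stripped.lstrip("#"))
            header_text = stripped.lstrip("#").strip()
            if header_text:  # skip bare '#' lines with no text
                sections.append({"level": level, "header": header_text})
    return sections

sample = "# Title\n\nIntro text.\n\n## Section A\n### Subsection\n##\n"
sections = extract_sections(sample)
# sections → [{'level': 1, 'header': 'Title'},
#             {'level': 2, 'header': 'Section A'},
#             {'level': 3, 'header': 'Subsection'}]
```

Note that the bare "##" line is dropped, matching the handler's non-empty-header check, and that setext-style or fenced-code "#" lines are not distinguished; the handler treats any line starting with '#' as a header.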
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover safety profile (readOnly, idempotent, non-destructive). The description adds valuable behavioral context by describing the return value structure ('list of sections with level (1=H1, 2=H2, etc.) and header text') which compensates for the missing output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Excellent structure with clear semantic sections ('Use for:', 'Returns:', 'Tips:'). Information is front-loaded with the core purpose in the first sentence. No redundant or wasteful text; every line provides actionable guidance.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple single-parameter tool with complete schema coverage and safety annotations, the description is comprehensive. It explains the output format (despite no output schema), clarifies relationships to sibling tools, and provides workflow guidance. No significant gaps remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage for the 'slug' parameter ('Unique slug identifier of page to list sections for'). The description does not add additional parameter semantics, meeting the baseline expectation when schema documentation is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description opens with specific verb 'Get' and clear resource 'table of contents (all section headers) for a Grokipedia article'. It effectively distinguishes from siblings like get_page_content (full text) and get_page_section (single section) by emphasizing it retrieves ALL headers/structure.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit 'Use for:' scenarios (understanding structure, finding sections) and critically includes 'Tips: Call before get_page_section to find valid section headers' - directly naming a sibling tool and establishing the correct workflow sequence.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
