TheOneTrueNiz

Grokipedia MCP Server

get_page_sections

Read-only · Idempotent

Extract the table of contents from Grokipedia articles to understand article structure and identify available sections before retrieving specific content.

Instructions

Get the table of contents (all section headers) for a Grokipedia article.

Use for: understanding article structure, finding which sections exist.
Returns: list of sections with level (1=H1, 2=H2, etc.) and header text.
Tips: Call before get_page_section to find valid section headers.
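The structured content returned by this tool can be used to pick a valid header before calling get_page_section. A minimal sketch of that selection step, assuming a hypothetical sample payload shaped like the tool's structured output (the slug and headers below are illustrative, not a real Grokipedia response):

```python
# Hypothetical sample of the tool's structuredContent (illustrative values).
structured = {
    "slug": "example-article",
    "title": "Example Article",
    "sections": [
        {"level": 1, "header": "Example Article"},
        {"level": 2, "header": "History"},
        {"level": 2, "header": "Applications"},
        {"level": 3, "header": "In industry"},
    ],
    "count": 4,
}

def top_level_headers(payload: dict) -> list[str]:
    """Return the H2 headers, the usual candidates for get_page_section."""
    return [s["header"] for s in payload["sections"] if s["level"] == 2]

print(top_level_headers(structured))  # ['History', 'Applications']
```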

Input Schema

Name | Required | Description | Default
slug | Yes | Unique slug identifier of the page to list sections for | (none)

Implementation Reference

  • Handler function that implements get_page_sections tool logic.
    async def get_page_sections(
        slug: Annotated[str, Field(description="Unique slug identifier of page to list sections for")],
        ctx: Context[ServerSession, AppContext] | None = None,
    ) -> CallToolResult:
        """Get the table of contents (all section headers) for a Grokipedia article.
    
        Use for: understanding article structure, finding which sections exist.
        Returns: list of sections with level (1=H1, 2=H2, etc.) and header text.
        Tips: Call before get_page_section to find valid section headers.
        """
        if ctx is None:
            raise ValueError("Context is required")
    
        await ctx.debug(f"Fetching section headers for: '{slug}'")
    
        try:
            client = ctx.request_context.lifespan_context.client
            result = await client.get_page(slug=slug, include_content=True)
    
            if not result.found or result.page is None:
                await ctx.warning(f"Page not found: '{slug}', searching for alternatives")
                search_result = await client.search(query=slug, limit=5)
                if search_result.results:
                    suggestions = [f"{r.title} ({r.slug})" for r in search_result.results[:3]]
                    await ctx.info(f"Found {len(search_result.results)} similar pages")
                    raise ValueError(
                        f"Page not found: {slug}. Did you mean one of these? {', '.join(suggestions)}"
                    )
                raise ValueError(f"Page not found: {slug}")
    
            page = result.page
            content = page.content or ""
            
            # Extract all markdown headers
            lines = content.split('\n')
            sections = []
            
            for line in lines:
                stripped = line.strip()
                if stripped.startswith("#"):
                    # Count leading '#' on the stripped line so indented headers
                    # still get the correct level
                    level = len(stripped) - len(stripped.lstrip("#"))
                    header_text = stripped.lstrip("#").strip()
                    if header_text:  # Only include non-empty headers
                        sections.append({"level": level, "header": header_text})
    
            await ctx.info(f"Found {len(sections)} section headers in '{page.title}'")
    
            if not sections:
                text_output = f"# {page.title}\n\nNo section headers found."
                structured = {
                    "slug": page.slug,
                    "title": page.title,
                    "sections": [],
                    "count": 0,
                }
            else:
                text_parts = [f"# {page.title}", "", f"Found {len(sections)} sections:", ""]
                for i, section in enumerate(sections, 1):
                    indent = "  " * (section["level"] - 1)
                    text_parts.append(
                        f"{i}. {indent}{section['header']} (Level {section['level']})"
                    )
    
                text_output = "\n".join(text_parts)
                structured = {
                    "slug": page.slug,
                    "title": page.title,
                    "sections": sections,
                    "count": len(sections),
                }
    
            return CallToolResult(
                content=[TextContent(type="text", text=text_output)],
                structuredContent=structured,
            )
    
        except GrokipediaNotFoundError as e:
            await ctx.error(f"Page not found: {e}")
            raise ValueError(f"Page not found: {slug}") from e
        except GrokipediaBadRequestError as e:
            await ctx.error(f"Bad request: {e}")
            raise ValueError(f"Invalid page slug: {e}") from e
        except GrokipediaNetworkError as e:
            await ctx.error(f"Network error: {e}")
            raise RuntimeError(f"Failed to connect to Grokipedia API: {e}") from e
        except GrokipediaAPIError as e:
            await ctx.error(f"API error: {e}")
            raise RuntimeError(f"Grokipedia API error: {e}") from e
  • Registration of the get_page_sections tool using the @mcp.tool decorator.
    @mcp.tool(
        annotations=ToolAnnotations(
            readOnlyHint=True,
            destructiveHint=False,
            idempotentHint=True
        )
    )
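The header-extraction loop in the handler above can be exercised on its own. A standalone sketch of the same parsing logic (the function name and sample text here are illustrative, not part of the server's API):

```python
def extract_sections(content: str) -> list[dict]:
    """Parse markdown content into [{'level': int, 'header': str}, ...]."""
    sections = []
    for line in content.split("\n"):
        stripped = line.strip()
        if stripped.startswith("#"):
            # Count leading '#' on the stripped line for the header level.
            level = len(stripped) - len(stripped.lstrip("#"))
            header_text = stripped.lstrip("#").strip()
            if header_text:  # skip bare '#' lines with no text
                sections.append({"level": level, "header": header_text})
    return sections

sample = "# Title\n\nIntro text.\n\n## Background\n\n### Details\n\n#"
print(extract_sections(sample))
```

Note that, like the handler, this treats any line beginning with `#` as a header, so `#` lines inside fenced code blocks would also be picked up.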


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/TheOneTrueNiz/mcp-grokipedia-tool'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.