
get_page

Retrieve complete page information from Grokipedia by its unique slug identifier, including metadata, a content preview, and a citation summary.

Instructions

Get complete page information including metadata, a content preview, and a citation summary.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| slug | Yes | Unique slug identifier of the page to retrieve | |
| max_content_length | No | Maximum length of content to return | 5000 |
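
For example, a connected MCP client can invoke this tool over stdio as in the sketch below (Python MCP SDK; the launch command and slug value are hypothetical):

    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def main() -> None:
        # Hypothetical launch command; adjust to however the server is installed.
        params = StdioServerParameters(command="grokipedia-mcp")
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                # "quantum-computing" is a hypothetical slug.
                result = await session.call_tool(
                    "get_page",
                    arguments={"slug": "quantum-computing", "max_content_length": 2000},
                )
                print(result.content[0].text)

    asyncio.run(main())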

Implementation Reference

  • Registers the 'get_page' tool with the FastMCP server via the @mcp.tool decorator, with tool annotations marking it read-only, non-destructive, and idempotent. A sketch of the surrounding server setup appears after this list.
    @mcp.tool(
        annotations=ToolAnnotations(
            readOnlyHint=True,
            destructiveHint=False,
            idempotentHint=True
        )
    )
  • The core handler for the 'get_page' tool. It fetches the page via the Grokipedia API client, suggests alternatives when the page is not found, truncates overly long content, formats a markdown preview with metadata and a citation summary, and returns a CallToolResult carrying both the text and the structured page data. The client-side data shapes it relies on are sketched after this list.
    async def get_page(
        slug: Annotated[str, Field(description="Unique slug identifier of the page to retrieve")],
        max_content_length: Annotated[int, Field(description="Maximum length of content to return (default: 5000)", ge=100)] = 5000,
        ctx: Context[ServerSession, AppContext] | None = None,
    ) -> CallToolResult:
        """Get complete page information including metadata, content preview, and citations summary."""
        if ctx is None:
            raise ValueError("Context is required")
    
        await ctx.debug(f"Fetching page: '{slug}'")
    
        try:
            client = ctx.request_context.lifespan_context.client
            result = await client.get_page(slug=slug, include_content=True)
    
            if not result.found or result.page is None:
                await ctx.warning(f"Page not found: '{slug}', searching for alternatives")
                search_result = await client.search(query=slug, limit=5)
                if search_result.results:
                    suggestions = [f"{r.title} ({r.slug})" for r in search_result.results[:3]]
                    await ctx.info(f"Found {len(search_result.results)} similar pages")
                    raise ValueError(
                        f"Page not found: {slug}. Did you mean one of these? {', '.join(suggestions)}"
                    )
                raise ValueError(f"Page not found: {slug}")
    
            await ctx.info(f"Retrieved page: '{result.page.title}' ({slug})")
            
            page = result.page
            content_len = len(page.content) if page.content else 0
            is_truncated = content_len > max_content_length
            
            text_parts = [
                f"# {page.title}",
                "",
                f"**Slug:** {page.slug}",
            ]
            
            if page.description:
                text_parts.extend(["", f"**Description:** {page.description}", ""])
            
            if page.content:
                preview_length = min(1000, max_content_length)
                text_parts.extend(["", "## Content Preview", "", page.content[:preview_length]])
                if content_len > preview_length:
                    text_parts.append(f"\n... (showing first {preview_length} of {content_len} chars)")
            
            if page.citations:
                text_parts.extend(["", f"## Citations ({len(page.citations)} total)", ""])
                for i, citation in enumerate(page.citations[:5], 1):
                    text_parts.append(f"{i}. {citation.title}: {citation.url}")
                if len(page.citations) > 5:
                    text_parts.append(f"... and {len(page.citations) - 5} more")
            
            page_dict = page.model_dump()
            if is_truncated:
                page_dict["content"] = page.content[:max_content_length]
                page_dict["_content_truncated"] = True
                page_dict["_original_length"] = content_len
                await ctx.warning(
                    f"Content truncated from {content_len} to {max_content_length} chars. "
                    f"Use get_page_content tool for full content access."
                )
            
            return CallToolResult(
                content=[TextContent(type="text", text="\n".join(text_parts))],
                structuredContent=page_dict,
            )
    
        except GrokipediaNotFoundError as e:
            await ctx.error(f"Page not found: {e}")
            raise ValueError(f"Page not found: {slug}") from e
        except GrokipediaBadRequestError as e:
            await ctx.error(f"Bad request: {e}")
            raise ValueError(f"Invalid page slug: {e}") from e
        except GrokipediaNetworkError as e:
            await ctx.error(f"Network error: {e}")
            raise RuntimeError(f"Failed to connect to Grokipedia API: {e}") from e
        except GrokipediaAPIError as e:
            await ctx.error(f"API error: {e}")
            raise RuntimeError(f"Grokipedia API error: {e}") from e
  • Input schema defined via Annotated parameters with Pydantic Field descriptions and constraints for the get_page tool; a standalone sketch of the ge=100 validation follows this list.
    async def get_page(
        slug: Annotated[str, Field(description="Unique slug identifier of the page to retrieve")],
        max_content_length: Annotated[int, Field(description="Maximum length of content to return (default: 5000)", ge=100)] = 5000,
        ctx: Context[ServerSession, AppContext] | None = None,
    ) -> CallToolResult:
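
The decorator excerpt above assumes a FastMCP server instance named mcp, and the handler reads its API client from ctx.request_context.lifespan_context.client. A minimal sketch of that setup, assuming the standard Python SDK imports; the server name, the GrokipediaClient stub, and its close() call are illustrative, not taken from the repository:

    from collections.abc import AsyncIterator
    from contextlib import asynccontextmanager
    from dataclasses import dataclass

    from mcp.server.fastmcp import FastMCP

    class GrokipediaClient:
        # Stand-in for the real Grokipedia API client class.
        async def close(self) -> None: ...

    @dataclass
    class AppContext:
        client: GrokipediaClient

    @asynccontextmanager
    async def app_lifespan(server: FastMCP) -> AsyncIterator[AppContext]:
        # The handler reads ctx.request_context.lifespan_context.client,
        # so the lifespan yields an object exposing a `client` attribute.
        client = GrokipediaClient()
        try:
            yield AppContext(client=client)
        finally:
            await client.close()

    mcp = FastMCP("grokipedia", lifespan=app_lifespan)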
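The handler's attribute accesses imply a small set of Pydantic models in the client library (model_dump() is Pydantic v2). A hedged reconstruction, with every field name inferred from its use in the handler rather than taken from the library:

    from pydantic import BaseModel

    class Citation(BaseModel):
        title: str
        url: str

    class Page(BaseModel):
        slug: str
        title: str
        description: str | None = None
        content: str | None = None
        citations: list[Citation] = []

    class PageResult(BaseModel):
        # get_page() checks `found` and reads `page`; the search fallback
        # additionally expects a result model with `results[i].title`/`.slug`.
        found: bool
        page: Page | None = None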
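FastMCP derives the tool's JSON input schema from these Annotated/Field hints, so the ge=100 lower bound on max_content_length is enforced before the handler runs. A standalone sketch of the same check in plain Pydantic (illustrative only):

    from typing import Annotated

    from pydantic import Field, TypeAdapter, ValidationError

    MaxContentLength = Annotated[int, Field(ge=100)]

    try:
        TypeAdapter(MaxContentLength).validate_python(50)
    except ValidationError as exc:
        # Rejected: 50 is below the ge=100 lower bound.
        print(exc)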

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/skymoore/grokipedia-mcp'
