get_page_citations

Retrieve citation lists for specific Grokipedia pages to verify sources and support research with optional limit controls.

Instructions

Get the citations list for a specific page.

Input Schema

| Name  | Required | Description                                                         | Default |
| ----- | -------- | ------------------------------------------------------------------- | ------- |
| slug  | Yes      | Unique slug identifier of the page to retrieve citations from       |         |
| limit | No       | Maximum number of citations to return; returns all if not specified |         |
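A minimal sketch of the arguments an MCP client might pass to this tool. The slug value is hypothetical; `slug` is required, and `limit` is optional but must be at least 1 when given:

```python
import json

# Hypothetical tool-call arguments for get_page_citations.
arguments = {
    "slug": "artificial-intelligence",  # hypothetical page slug
    "limit": 5,                         # return at most 5 citations
}

# Omitting "limit" entirely returns all citations for the page.
print(json.dumps(arguments, indent=2))
```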

Implementation Reference

  • The core handler function for the 'get_page_citations' tool. It retrieves citations for the given page slug via the Grokipedia API client, supporting an optional limit on the number returned. It produces both a human-readable Markdown list of citations (title, URL, description) and structured JSON output with page details and citation data, logs debug/info/warning/error messages along the way, and maps specific failures — not found, bad request, network, and API errors — to ValueError or RuntimeError as appropriate.
    async def get_page_citations(
        slug: Annotated[str, Field(description="Unique slug identifier of page to retrieve citations from")],
        limit: Annotated[int | None, Field(description="Maximum number of citations to return (optional, returns all if not specified)", ge=1)] = None,
        ctx: Context[ServerSession, AppContext] | None = None,
    ) -> CallToolResult:
        """Get the citations list for a specific page."""
        if ctx is None:
            raise ValueError("Context is required")
    
        await ctx.debug(f"Fetching citations for: '{slug}' (limit={limit})")
    
        try:
            client = ctx.request_context.lifespan_context.client
            result = await client.get_page(slug=slug, include_content=False)
    
            if not result.found or result.page is None:
                await ctx.warning(f"Page not found: '{slug}'")
                raise ValueError(f"Page not found: {slug}")
    
            page = result.page
            all_citations = page.citations or []
            total_count = len(all_citations)
            
        citations = all_citations[:limit] if limit is not None else all_citations
        is_limited = limit is not None and total_count > limit
            
            await ctx.info(
                f"Retrieved {len(citations)} of {total_count} citations for: '{page.title}'"
            )
            
            if not all_citations:
                text_output = f"# {page.title}\n\nNo citations found."
                structured = {
                    "slug": page.slug,
                    "title": page.title,
                    "citations": [],
                    "total_count": 0,
                    "returned_count": 0,
                }
            else:
                header = f"# {page.title}\n\n"
                if is_limited:
                    header += f"Showing {len(citations)} of {total_count} citations:\n"
                else:
                    header += f"Found {total_count} citations:\n"
                
                text_parts = [header]
                for i, citation in enumerate(citations, 1):
                    text_parts.append(f"{i}. **{citation.title}**")
                    text_parts.append(f"   URL: {citation.url}")
                    if citation.description:
                        text_parts.append(f"   Description: {citation.description}")
                    text_parts.append("")
                
                if is_limited:
                    text_parts.append(f"... and {total_count - len(citations)} more citations")
                
                text_output = "\n".join(text_parts)
                structured = {
                    "slug": page.slug,
                    "title": page.title,
                    "citations": [c.model_dump() for c in citations],
                    "total_count": total_count,
                    "returned_count": len(citations),
                }
                
                if is_limited:
                    structured["_limited"] = True
            
            return CallToolResult(
                content=[TextContent(type="text", text=text_output)],
                structuredContent=structured,
            )
    
        except GrokipediaNotFoundError as e:
            await ctx.error(f"Page not found: {e}")
            raise ValueError(f"Page not found: {slug}") from e
        except GrokipediaBadRequestError as e:
            await ctx.error(f"Bad request: {e}")
            raise ValueError(f"Invalid page slug: {e}") from e
        except GrokipediaNetworkError as e:
            await ctx.error(f"Network error: {e}")
            raise RuntimeError(f"Failed to connect to Grokipedia API: {e}") from e
        except GrokipediaAPIError as e:
            await ctx.error(f"API error: {e}")
            raise RuntimeError(f"Grokipedia API error: {e}") from e
  • Registers the 'get_page_citations' tool with the MCP framework using the @mcp.tool decorator, specifying ToolAnnotations indicating it is read-only, non-destructive, and idempotent.
    @mcp.tool(
        annotations=ToolAnnotations(
            readOnlyHint=True,
            destructiveHint=False,
            idempotentHint=True
        )
    )
  • Input schema defined via Annotated types with Pydantic Field metadata providing descriptions and validation (e.g., limit ge=1). Output type is CallToolResult with TextContent and structuredContent.
        slug: Annotated[str, Field(description="Unique slug identifier of page to retrieve citations from")],
        limit: Annotated[int | None, Field(description="Maximum number of citations to return (optional, returns all if not specified)", ge=1)] = None,
        ctx: Context[ServerSession, AppContext] | None = None,
    ) -> CallToolResult:

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/skymoore/grokipedia-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.