get_page_citations
Read-only · Idempotent
Retrieve source citations for Grokipedia articles to verify claims, support academic research, and access original reference materials.
Instructions
Get the source citations for a Grokipedia article.
Use for: finding source materials, verifying claims, academic research, fact-checking.
Returns: a list of citations with title, URL, and description.
Tip: useful for grounding AI-generated knowledge in original sources.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | Unique slug identifier of page to retrieve citations from | |
| limit | No | Maximum number of citations to return (optional, returns all if not specified) | |
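Arguments for a call to this tool follow the schema above. A minimal example request body (the slug value here is hypothetical):

```json
{
  "name": "get_page_citations",
  "arguments": {
    "slug": "alan-turing",
    "limit": 5
  }
}
```

Omitting `limit` returns every citation on the page.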
Implementation Reference
- grokipedia_mcp/server.py:305-382 (handler): implementation of the `get_page_citations` tool, which fetches citations for a given page slug.
```python
async def get_page_citations(
    slug: Annotated[str, Field(description="Unique slug identifier of page to retrieve citations from")],
    limit: Annotated[int | None, Field(description="Maximum number of citations to return (optional, returns all if not specified)", ge=1)] = None,
    ctx: Context[ServerSession, AppContext] | None = None,
) -> CallToolResult:
    """Get the source citations for a Grokipedia article.

    Use for: finding source materials, verifying claims, academic research, fact-checking.
    Returns: list of citations with title, URL, and description.
    Tips: Great for grounding AI-generated knowledge with original sources.
    """
    if ctx is None:
        raise ValueError("Context is required")

    await ctx.debug(f"Fetching citations for: '{slug}' (limit={limit})")

    try:
        client = ctx.request_context.lifespan_context.client
        result = await client.get_page(slug=slug, include_content=False)

        if not result.found or result.page is None:
            await ctx.warning(f"Page not found: '{slug}'")
            raise ValueError(f"Page not found: {slug}")

        page = result.page
        all_citations = page.citations or []
        total_count = len(all_citations)
        citations = all_citations[:limit] if limit else all_citations
        is_limited = limit and total_count > limit

        await ctx.info(
            f"Retrieved {len(citations)} of {total_count} citations for: '{page.title}'"
        )

        if not all_citations:
            text_output = f"# {page.title}\n\nNo citations found."
            structured = {
                "slug": page.slug,
                "title": page.title,
                "citations": [],
                "total_count": 0,
                "returned_count": 0,
            }
        else:
            header = f"# {page.title}\n\n"
            if is_limited:
                header += f"Showing {len(citations)} of {total_count} citations:\n"
            else:
                header += f"Found {total_count} citations:\n"

            text_parts = [header]
            for i, citation in enumerate(citations, 1):
                text_parts.append(f"{i}. **{citation.title}**")
                text_parts.append(f"   URL: {citation.url}")
                if citation.description:
                    text_parts.append(f"   Description: {citation.description}")
                text_parts.append("")

            if is_limited:
                text_parts.append(f"... and {total_count - len(citations)} more citations")

            text_output = "\n".join(text_parts)
            structured = {
                "slug": page.slug,
                "title": page.title,
                "citations": [c.model_dump() for c in citations],
                "total_count": total_count,
                "returned_count": len(citations),
            }
            if is_limited:
                structured["_limited"] = True

        return CallToolResult(
            content=[TextContent(type="text", text=text_output)],
            structuredContent=structured,
        )
```
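The truncation behavior of the handler (slice to `limit`, flag the result as limited, and append a "... and N more" suffix) can be sketched in isolation; the helper name and sample citation strings below are illustrative, not part of the server:

```python
def truncate_citations(all_citations, limit=None):
    """Mirror the limit handling in get_page_citations on plain data."""
    total_count = len(all_citations)
    # Same expression as the handler: slice only when a limit was given.
    citations = all_citations[:limit] if limit else all_citations
    is_limited = bool(limit) and total_count > limit
    suffix = f"... and {total_count - len(citations)} more citations" if is_limited else ""
    return citations, total_count, is_limited, suffix

# Sample data stands in for real citation objects.
cites = [f"citation-{i}" for i in range(10)]
kept, total, limited, suffix = truncate_citations(cites, limit=3)
print(len(kept), total, limited, suffix)
# → 3 10 True ... and 7 more citations
```

Note that `limit=None` (and `limit=0`, which the schema's `ge=1` constraint rules out anyway) falls through to returning all citations, matching the documented default.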