get_article
This MCP tool retrieves the complete content of a Wikipedia article by its title, providing in-depth information for research or additional context.
Instructions
Get the full content of a Wikipedia article.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| title | Yes | The title of the Wikipedia article to retrieve. | |
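
As a hedged illustration, the sketch below shows how a client might invoke this tool with the official `mcp` Python SDK over stdio. The launch command (`wikipedia-mcp`) and the article title are assumptions for the example, not part of this tool's documentation.

```python
# Minimal sketch of calling the get_article tool from an MCP client.
# The server launch command and the article title are assumed values.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    params = StdioServerParameters(command="wikipedia-mcp")  # assumed launch command
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Call the tool with its single required argument.
            result = await session.call_tool(
                "get_article", {"title": "Python (programming language)"}
            )
            print(result.content)


asyncio.run(main())
```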
Implementation Reference
- wikipedia_mcp/server.py:81-86 (handler): MCP tool handler and registration for 'get_article'. This function is decorated with @server.tool() and delegates the actual Wikipedia API interaction to the WikipediaClient instance.

```python
@server.tool()
def get_article(title: str) -> Dict[str, Any]:
    """Get the full content of a Wikipedia article."""
    logger.info(f"Tool: Getting article: {title}")
    article = wikipedia_client.get_article(title)
    return article
```
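
The handler relies on two module-level objects, `server` and `wikipedia_client`, which are not shown in the reference. Below is a minimal, hypothetical sketch of that wiring using FastMCP; the import path for WikipediaClient, the server name, and the constructor arguments are assumptions, not taken from the project.

```python
# Hypothetical sketch of the module-level wiring assumed by the handler above.
from mcp.server.fastmcp import FastMCP

from wikipedia_mcp.wikipedia_client import WikipediaClient  # assumed module path

server = FastMCP("wikipedia")         # assumed server name
wikipedia_client = WikipediaClient()  # assumed constructor arguments

# Tool handlers such as get_article are registered with @server.tool().

if __name__ == "__main__":
    server.run()
```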
- Core implementation of get_article in the WikipediaClient class. It fetches the page via wikipediaapi, extracts the summary, text, sections, categories, and links, and handles the case where the page does not exist.

```python
def get_article(self, title: str) -> Dict[str, Any]:
    """Get the full content of a Wikipedia article.

    Args:
        title: The title of the Wikipedia article.

    Returns:
        A dictionary containing the article information.
    """
    try:
        page = self.wiki.page(title)

        if not page.exists():
            return {"title": title, "exists": False, "error": "Page does not exist"}

        # Get sections
        sections = self._extract_sections(page.sections)

        # Get categories
        categories = [cat for cat in page.categories.keys()]

        # Get links
        links = [link for link in page.links.keys()]

        return {
            "title": page.title,
            "pageid": page.pageid,
            "summary": page.summary,
            "text": page.text,
            "url": page.fullurl,
            "sections": sections,
            "categories": categories,
            "links": links[:100],  # Limit to 100 links to avoid too much data
            "exists": True,
        }
    except Exception as e:
        logger.error(f"Error getting Wikipedia article: {e}")
        return {"title": title, "exists": False, "error": str(e)}
```
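
The implementation calls a private helper, `_extract_sections`, which is not included in this reference. A plausible sketch of it is shown below, based on the wikipediaapi section API (each section exposes `title`, `text`, and nested `sections`); the exact field names and nesting format used by the real project may differ.

```python
# Hypothetical sketch of the _extract_sections helper referenced above,
# assuming it recursively flattens wikipediaapi section objects into dicts.
from typing import Any, Dict, List


def _extract_sections(self, sections, level: int = 0) -> List[Dict[str, Any]]:
    """Recursively convert wikipediaapi sections into plain dictionaries."""
    result = []
    for section in sections:
        result.append(
            {
                "title": section.title,
                "level": level,
                "text": section.text,
                # Recurse into nested subsections.
                "sections": self._extract_sections(section.sections, level + 1),
            }
        )
    return result
```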