get_page
Retrieve page content from MediaWiki sites like Wikipedia and Fandom by specifying the page title.
Instructions
Get a page from mediawiki.org

Args:
    title: The title of the page to get, which can be found in the title field of the search results

Returns:
    The page content
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| title | Yes | The title of the page to get, which can be found in the `title` field of the search results | (none) |
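Since the tool takes a single required `title` argument, a client invokes it with an MCP `tools/call` request. A minimal sketch of the request payload (the title `Manual:Pywikibot` is a hypothetical example):

```python
# Illustrative JSON-RPC 2.0 payload for calling the get_page tool over MCP.
# "title" is the only required argument, per the input schema above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_page",
        "arguments": {"title": "Manual:Pywikibot"},  # hypothetical page title
    },
}
```

The server's response carries the page content returned by the handler below.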
Implementation Reference
- src/mediawiki_mcp_server/main.py:73-83 (handler)

  The main handler function for the `get_page` tool. It is decorated with `@mcp.tool()` for registration and uses the `make_request` helper to fetch the page content from the MediaWiki API given the page title.

  ```python
  @mcp.tool()
  async def get_page(title: str):
      """Get a page from mediawiki.org

      Args:
          title: The title of the page to get, which can be found in title field of the search results

      Returns:
          The page content
      """
      path = f"page/{title}"
      response = await make_request(path, {})
      return response
  ```
- Docstring providing the input schema (`title: str`) and output description for the `get_page` tool, used by FastMCP for validation:

  ```python
  """Get a page from mediawiki.org

  Args:
      title: The title of the page to get, which can be found in title field of the search results

  Returns:
      The page content
  """
  ```
- Supporting helper function that performs the actual HTTP request to the MediaWiki API, handling proxies, redirects, and errors. It is invoked by the `get_page` handler. Note that despite the `httpx.Response` return annotation, it returns parsed JSON on success and an error dict on failure.

  ```python
  async def make_request(path: str, params: dict) -> httpx.Response:
      headers = {
          "User-Agent": USER_AGENT,
      }
      url = config.base_url + config.path_prefix + path
      proxies = get_proxy_settings()
      async with httpx.AsyncClient(proxies=proxies, follow_redirects=True) as client:
          try:
              response = await client.get(url, headers=headers, params=params)
              if response.status_code in (301, 302, 303, 307, 308):
                  final_response = await client.get(
                      response.headers["Location"], headers=headers
                  )
                  return final_response.json()
              return response.json()
          except httpx.HTTPStatusError as e:
              logger.error(e)
              return {"error": e}
  ```
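The handler interpolates the raw title directly into the REST path (`f"page/{title}"`). If titles may contain reserved characters such as `/`, `?`, or spaces, percent-encoding them before building the path is one safeguard. A minimal sketch under that assumption (`build_page_path` is a hypothetical helper, not part of the server):

```python
from urllib.parse import quote


def build_page_path(title: str) -> str:
    """Percent-encode a page title before placing it in the REST path,
    so reserved characters like '/' and '?' cannot break the URL.
    Hypothetical helper for illustration only."""
    return "page/" + quote(title, safe="")


# Usage: a title containing a slash stays a single path segment.
path = build_page_path("Manual/Sub page")
```

With `safe=""`, even `/` is encoded, so a title like `A/B` maps to one path segment rather than two.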