# get_docs
Searches the latest documentation for a given query in a supported library (langchain, openai, or llama-index) and returns the relevant page text for quick reference.
## Instructions
Search the latest docs for a given query and library. Supports langchain, openai, and llama-index.
Args:

- `query`: The query to search for (e.g. "Chroma DB")
- `library`: The library to search in (e.g. "langchain")

Returns: Text from the docs
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| `library` | Yes | The library to search in (e.g. "langchain"). One of `langchain`, `openai`, `llama-index`. | — |
| `query` | Yes | The query to search for (e.g. "Chroma DB"). | — |
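Both arguments are required strings. An arguments object matching this schema might look like the following (values are illustrative):

```python
# Example arguments for the get_docs tool; both fields are required strings.
arguments = {
    "query": "Chroma DB",    # free-text search query
    "library": "langchain",  # must be a supported library key
}
```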
## Implementation Reference
- `main.py:48-72` (handler) — The main handler function for the `get_docs` tool, decorated with `@mcp.tool()` for registration. It performs a site-specific web search for the query in the given library's documentation site, fetches the top results, and returns their concatenated text content.

  ```python
  @mcp.tool()
  async def get_docs(query: str, library: str):
      """
      Search the latest docs for a given query and library.
      Supports langchain, openai, and llama-index.

      Args:
          query: The query to search for (e.g. "Chroma DB")
          library: The library to search in (e.g. "langchain")

      Returns:
          Text from the docs
      """
      if library not in docs_urls:
          raise ValueError(f"Library {library} not supported by this tool")

      query = f"site:{docs_urls[library]} {query}"
      results = await search_web(query)
      if len(results["organic"]) == 0:
          return "No results found"

      text = ""
      for result in results["organic"]:
          text += await fetch_url(result["link"])
      return text
  ```
- `main.py:20-37` (helper) — Helper function that performs a web search via the Serper API, limited to 2 results; used by `get_docs` to find relevant doc pages.

  ```python
  async def search_web(query: str) -> dict | None:
      payload = json.dumps({"q": query, "num": 2})
      headers = {
          "X-API-KEY": os.getenv("SERPER_API_KEY"),
          "Content-Type": "application/json",
      }
      async with httpx.AsyncClient() as client:
          try:
              response = await client.post(
                  SERPER_URL, headers=headers, data=payload, timeout=30.0
              )
              response.raise_for_status()
              return response.json()
          except httpx.TimeoutException:
              return {"organic": []}
  ```
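`search_web` returns the raw Serper JSON; `get_docs` only relies on the `organic` list and each result's `link` field. A minimal mocked response illustrating the shape the code assumes (the real payload carries many more fields, and these links are placeholders):

```python
# Mocked response in the shape get_docs expects from search_web.
# The URLs below are placeholders, not real documentation pages.
results = {
    "organic": [
        {"link": "https://example.com/docs/a"},
        {"link": "https://example.com/docs/b"},
    ]
}

# get_docs iterates results["organic"] and fetches each result["link"].
links = [result["link"] for result in results["organic"]]
```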
- `main.py:38-47` (helper) — Helper function that fetches a URL and extracts its text content with BeautifulSoup; used by `get_docs` to retrieve doc page text.

  ```python
  async def fetch_url(url: str):
      async with httpx.AsyncClient() as client:
          try:
              response = await client.get(url, timeout=30.0)
              soup = BeautifulSoup(response.text, "html.parser")
              text = soup.get_text()
              return text
          except httpx.TimeoutException:
              return "Timeout error"
  ```
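`fetch_url` relies on BeautifulSoup's `get_text()` to strip markup and keep only the page text. As an illustration of that step, the same extraction can be sketched with the stdlib `html.parser` — a simplified stand-in for `get_text()`, not the code above:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects the text nodes of an HTML document, discarding tags."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

def extract_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return "".join(parser.chunks)

print(extract_text("<p>Use <b>Chroma</b> as a vector store.</p>"))
# → Use Chroma as a vector store.
```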
- `main.py:14-18` (helper) — Configuration dictionary mapping library names to their documentation site base URLs; used by `get_docs` for site-restricted searches.

  ```python
  docs_urls = {
      "langchain": "python.langchain.com/docs",
      "llama-index": "docs.llamaindex.ai/en/stable",
      "openai": "platform.openai.com/docs",
  }
  ```
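Putting the pieces together, the query that `get_docs` sends to the search API is the user's query prefixed with a `site:` restriction built from this mapping. A minimal self-contained sketch of that construction (the `build_query` helper name is ours, factored out of `get_docs` for illustration):

```python
# Library-name-to-docs-site mapping, as defined in main.py.
docs_urls = {
    "langchain": "python.langchain.com/docs",
    "llama-index": "docs.llamaindex.ai/en/stable",
    "openai": "platform.openai.com/docs",
}

def build_query(query: str, library: str) -> str:
    """Build the site-restricted search query used by get_docs."""
    if library not in docs_urls:
        raise ValueError(f"Library {library} not supported by this tool")
    return f"site:{docs_urls[library]} {query}"

print(build_query("Chroma DB", "langchain"))
# → site:python.langchain.com/docs Chroma DB
```

The `site:` operator restricts results to the matching documentation domain, so unrelated pages never reach `fetch_url`.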