get_docs
Search the live documentation of LangChain, LlamaIndex, MCP, or OpenAI with a free-text query and a library name. The tool returns text scraped directly from the official docs so an LLM can work from up-to-date information.
Instructions
Search the docs for a given query and library.
Supports langchain, llama-index, mcp, and openai.
Args:
- query: The query to search for (e.g. "Chroma DB")
- library: The library to search in (e.g. "langchain")

Returns:
- Text from the docs
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| library | Yes | The library to search in (e.g. "langchain") | |
| query | Yes | The query to search for (e.g. "Chroma DB") | |
Input Schema (JSON Schema)
```json
{
  "properties": {
    "library": {
      "title": "Library",
      "type": "string"
    },
    "query": {
      "title": "Query",
      "type": "string"
    }
  },
  "required": [
    "query",
    "library"
  ],
  "title": "get_docsArguments",
  "type": "object"
}
```
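For reference, the arguments object a client sends must match this schema. Below is a minimal sketch of invoking the tool through the MCP Python SDK over stdio; the server command (`python server.py`) and the result handling are assumptions for illustration, not part of the tool definition.

```python
# Minimal sketch: calling get_docs from an MCP client over stdio.
# The server command and the result printing are assumptions, not part of the tool.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "get_docs",
                arguments={"query": "Chroma DB", "library": "langchain"},
            )
            # The result carries a list of content blocks; text blocks hold the scraped docs.
            for block in result.content:
                text = getattr(block, "text", None)
                if text:
                    print(text[:500])


asyncio.run(main())
```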
Implementation Reference
- server.py:56-82 (handler): The primary handler function for the `get_docs` MCP tool. It is registered via the `@mcp.tool()` decorator and implements the core logic: it validates the library, constructs a site-specific search query, fetches the top results via the Serper API, scrapes their content with BeautifulSoup, and returns the concatenated text.

  ```python
  @mcp.tool()
  async def get_docs(query: str, library: str):
      """
      Search the docs for a given query and library.
      Supports langchain, llama-index, mcp, and openai.

      Args:
          query: The query to search for (e.g. "Chroma DB")
          library: The library to search in (e.g. "langchain")

      Returns:
          Text from the docs
      """
      if library not in docs_urls:
          raise ValueError(f"Library {library} not supported by this tool")

      query = f"site:{docs_urls[library]} {query}"  # Serper search format for searching in specified site
      results = await search_web(query)
      if len(results["organic"]) == 0:
          return "No results found"

      text = ""
      for result in results["organic"]:
          text += await fetch_url(result["link"])
      return text
  ```
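  The `@mcp.tool()` decorator implies a FastMCP server instance defined elsewhere in server.py. The following is a minimal sketch of that surrounding setup, assuming the server is named "docs" and runs over the stdio transport; neither detail appears in the excerpt above.

  ```python
  # Sketch of the module-level setup assumed by @mcp.tool();
  # the server name and transport are assumptions, not taken from the excerpt.
  import json
  import os

  import httpx
  from bs4 import BeautifulSoup
  from mcp.server.fastmcp import FastMCP

  mcp = FastMCP("docs")
  SERPER_URL = "https://google.serper.dev/search"

  if __name__ == "__main__":
      mcp.run(transport="stdio")
  ```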
- server.py:58-68 (schema): The input/output contract is defined in the function docstring, specifying the parameters `query` (str) and `library` (str), the supported libraries, and the return value (text from the docs).

  ```python
  """
  Search the docs for a given query and library.
  Supports langchain, llama-index, mcp, and openai.

  Args:
      query: The query to search for (e.g. "Chroma DB")
      library: The library to search in (e.g. "langchain")

  Returns:
      Text from the docs
  """
  ```
- server.py:17-22 (helper): Configuration dictionary mapping the supported library names to their base documentation URLs, used to restrict searches to those sites.

  ```python
  docs_urls = {
      "langchain": "python.langchain.com/docs",
      "llama-index": "docs.llamaindex.ai/en/stable",
      "mcp": "modelcontextprotocol.io",
      "openai": "platform.openai.com/docs"
  }
  ```
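  Because the handler validates `library` against this dict, adding support for another documentation site only requires a new entry. The snippet below is purely illustrative; the key and URL are an example, not part of the server.

  ```python
  # Illustrative only: one extra entry would make another doc site searchable.
  docs_urls["fastapi"] = "fastapi.tiangolo.com"
  ```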
- server.py:25-42 (helper): Helper function that performs a web search using the Serper API (Google search), limited to 2 results, with timeout handling.

  ```python
  async def search_web(query: str) -> dict | None:
      payload = json.dumps({"q": query, "num": 2})

      headers = {
          "X-API-KEY": os.getenv("SERPER_API_KEY"),
          "Content-Type": "application/json"
      }

      async with httpx.AsyncClient() as client:
          try:
              response = await client.post(
                  SERPER_URL, headers=headers, data=payload, timeout=30.0
              )
              response.raise_for_status()
              return response.json()
          except httpx.TimeoutException:
              return {"organic": []}
  ```
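  The handler only touches the `organic` key of the Serper response and each result's `link`. Here is a trimmed, illustrative example of the response shape this code expects; the values are made up.

  ```python
  # Trimmed, illustrative Serper response; only "organic" and "link" are used downstream.
  example_response = {
      "organic": [
          {
              "title": "Chroma | LangChain",
              "link": "https://python.langchain.com/docs/integrations/vectorstores/chroma/",
              "snippet": "Chroma is an open-source vector database ...",
          }
      ]
  }
  ```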
- server.py:45-54 (helper): Helper function that fetches a given URL with httpx, parses the HTML with BeautifulSoup to extract its text, with timeout handling.

  ```python
  async def fetch_url(url: str):
      async with httpx.AsyncClient() as client:
          try:
              response = await client.get(url, timeout=30.0)
              soup = BeautifulSoup(response.text, "html.parser")
              text = soup.get_text()
              return text
          except httpx.TimeoutException:
              return "Timeout error"
  ```
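  A quick way to exercise the scraper on its own is shown below; the URL is just an example page, and any documentation URL works.

  ```python
  # Standalone check of fetch_url (example URL; any docs page works).
  import asyncio

  text = asyncio.run(fetch_url("https://modelcontextprotocol.io/introduction"))
  print(text[:300])
  ```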