get_docs
Search the documentation for the langchain, openai, and llama-index libraries using targeted queries.
Instructions
Search the latest docs for a given query and library. Supports langchain, openai, and llama-index.
Args:
- query: The query to search for (e.g. "Chroma DB")
- library: The library to search in (e.g. "langchain")
Returns: Text from the docs
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | The query to search for (e.g. "Chroma DB") | — |
| library | Yes | The library to search in; one of `langchain`, `llama-index`, `openai` | — |
Implementation Reference
- main.py:48-72 (handler)

  The core handler function for the `get_docs` MCP tool, registered via the `@mcp.tool()` decorator. It performs a site-restricted Google search through the Serper API and aggregates the text content of the top results.

  ```python
  @mcp.tool()
  async def get_docs(query: str, library: str):
      """
      Search the latest docs for a given query and library.
      Supports langchain, openai, and llama-index.

      Args:
          query: The query to search for (e.g. "Chroma DB")
          library: The library to search in (e.g. "langchain")

      Returns:
          Text from the docs
      """
      if library not in docs_urls:
          raise ValueError(f"Library {library} not supported by this tool")

      query = f"site:{docs_urls[library]} {query}"
      results = await search_web(query)
      if len(results["organic"]) == 0:
          return "No results found"

      text = ""
      for result in results["organic"]:
          text += await fetch_url(result["link"])
      return text
  ```
- main.py:20-37 (helper)

  Helper that performs a web search via the Serper API and returns the parsed JSON results (or an empty result set on timeout).

  ```python
  async def search_web(query: str) -> dict | None:
      payload = json.dumps({"q": query, "num": 2})
      headers = {
          "X-API-KEY": os.getenv("SERPER_API_KEY"),
          "Content-Type": "application/json",
      }
      async with httpx.AsyncClient() as client:
          try:
              response = await client.post(
                  SERPER_URL, headers=headers, data=payload, timeout=30.0
              )
              response.raise_for_status()
              return response.json()
          except httpx.TimeoutException:
              return {"organic": []}
  ```
- main.py:38-47 (helper)

  Helper that fetches a URL and extracts its plain text with BeautifulSoup (returns "Timeout error" on timeout).

  ```python
  async def fetch_url(url: str):
      async with httpx.AsyncClient() as client:
          try:
              response = await client.get(url, timeout=30.0)
              soup = BeautifulSoup(response.text, "html.parser")
              text = soup.get_text()
              return text
          except httpx.TimeoutException:
              return "Timeout error"
  ```
- main.py:14-18 (helper)

  Configuration dictionary mapping each supported library name to its documentation site URL, used to build the site-restricted search query.

  ```python
  docs_urls = {
      "langchain": "python.langchain.com/docs",
      "llama-index": "docs.llamaindex.ai/en/stable",
      "openai": "platform.openai.com/docs",
  }
  ```
- main.py:49-60 (schema)

  The input schema is defined by the type hints and docstring: `query: str` and `library: str` (one of `langchain`, `llama-index`, `openai`); the output is a string containing text from the docs.

  ```python
  async def get_docs(query: str, library: str):
      """
      Search the latest docs for a given query and library.
      Supports langchain, openai, and llama-index.

      Args:
          query: The query to search for (e.g. "Chroma DB")
          library: The library to search in (e.g. "langchain")

      Returns:
          Text from the docs
      """
  ```
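The `docs_urls` mapping drives the `site:` filter that `get_docs` prepends to the user's query before calling Serper. A minimal standalone sketch of that validation and composition step (the `build_query` helper name is ours; the mapping and logic mirror main.py):

```python
# Mapping of supported libraries to their documentation sites (from main.py:14-18).
docs_urls = {
    "langchain": "python.langchain.com/docs",
    "llama-index": "docs.llamaindex.ai/en/stable",
    "openai": "platform.openai.com/docs",
}

def build_query(query: str, library: str) -> str:
    # Same validation and query composition as the get_docs handler.
    if library not in docs_urls:
        raise ValueError(f"Library {library} not supported by this tool")
    return f"site:{docs_urls[library]} {query}"

print(build_query("Chroma DB", "langchain"))
# site:python.langchain.com/docs Chroma DB
```

An unsupported library name (e.g. `"requests"`) raises `ValueError` before any network call is made, which is why the schema restricts `library` to the three listed values.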
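The handler's aggregation loop can be exercised without network access by stubbing the fetch step. A sketch under that assumption (`fake_fetch`, `aggregate`, and the sample URLs are hypothetical stand-ins for `fetch_url` and the Serper response):

```python
import asyncio

# Hypothetical stand-in for fetch_url: returns canned text per URL.
async def fake_fetch(url: str) -> str:
    pages = {
        "https://example.com/a": "Page A text. ",
        "https://example.com/b": "Page B text.",
    }
    return pages.get(url, "")

async def aggregate(results: dict) -> str:
    # Mirrors the handler's loop: concatenate the text of each organic result.
    if len(results["organic"]) == 0:
        return "No results found"
    text = ""
    for result in results["organic"]:
        text += await fake_fetch(result["link"])
    return text

sample = {"organic": [{"link": "https://example.com/a"},
                      {"link": "https://example.com/b"}]}
print(asyncio.run(aggregate(sample)))
# Page A text. Page B text.
```

Note that the real handler returns the raw concatenation of entire pages, so callers should expect a long, unstructured string rather than ranked snippets.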