get_docs
Search and retrieve documentation from the LangChain, LlamaIndex, or OpenAI libraries for a given query, returning relevant text extracts for use in conversations.
Instructions
Search the docs for a given query and library. Supports langchain, llama-index, and openai.

Args:
- query: The query to search for (e.g. "Chroma DB").
- library: The library to search in. One of langchain, llama-index, openai.
- max_chars: Maximum characters to return (default: 1000 for free tier).

Returns: Text from the documentation.
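Once the server is running, an MCP client can invoke the tool like this. This is a minimal sketch using the official `mcp` Python SDK; it assumes the server is launched as `python main.py` over stdio, which may differ from your setup.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Assumption: the server from main.py (see Implementation Reference) runs over stdio.
server_params = StdioServerParameters(command="python", args=["main.py"])

async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Invoke get_docs against the LangChain documentation.
            result = await session.call_tool(
                "get_docs",
                arguments={"query": "Chroma DB", "library": "langchain"},
            )
            print(result.content[0].text)

asyncio.run(main())
```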
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | The query to search for (e.g. "Chroma DB"). | |
| library | Yes | The library to search in. One of langchain, llama-index, openai. | |
| max_chars | No | Maximum characters to return. | 1000 |
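FastMCP derives the JSON Schema for these inputs from the function signature and type hints. Roughly, it looks like the sketch below, expressed here as a Python dict (an approximation; exact titles and extra fields depend on the FastMCP version):

```python
# Approximate input schema generated from the get_docs signature (assumption:
# field titles and metadata vary across FastMCP versions).
GET_DOCS_INPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "query": {"type": "string"},
        "library": {"type": "string"},
        "max_chars": {"type": "integer", "default": 1000},
    },
    "required": ["query", "library"],
}
```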
Implementation Reference
- main.py:63-87 (handler): The `get_docs` tool handler, decorated with `@mcp.tool()` for registration. It validates the library, runs a site-restricted docs search via the Serper API, fetches and concatenates content from the top results, and truncates the output to `max_chars`.

```python
@mcp.tool()
async def get_docs(query: str, library: str, max_chars: int = 1000):
    """
    Search the docs for a given query and library.
    Supports langchain, llama-index, and openai.

    Args:
        query: The query to search for (e.g.: "Chroma DB").
        library: The library to search in. One of langchain, llama-index, openai.
        max_chars: Maximum characters to return (default: 1000 for free tier).

    Returns:
        Text from the documentation.
    """
    if library not in docs_urls:
        raise ValueError(
            f"Library {library} not supported. "
            f"Supported libraries are: {', '.join(docs_urls.keys())}"
        )

    url = f"site:{docs_urls[library]} {query}"
    results = await search_web(url)
    if len(results["organic"]) == 0:
        return "No results found."

    text = ""
    for result in results["organic"]:
        text += await fetch_url(result["link"])
    return text[:max_chars]  # Limit to max_chars characters
```
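The handler references a module-level `docs_urls` mapping, and the helpers below reference `SERPER_URL`, neither of which appears in the excerpts. A plausible sketch of that module-level setup, assuming FastMCP from the official MCP Python SDK; only the three library keys are confirmed by the docstring, and the documentation URLs are illustrative guesses:

```python
import json
import os

import httpx
from bs4 import BeautifulSoup
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("docs")

# Serper's Google Search endpoint.
SERPER_URL = "https://google.serper.dev/search"

# Maps each supported library to the docs site used in the site: query.
# Keys are confirmed by the docstring; the URL values are illustrative.
docs_urls = {
    "langchain": "python.langchain.com/docs",
    "llama-index": "docs.llamaindex.ai/en/stable",
    "openai": "platform.openai.com/docs",
}
```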
- main.py:65-76 (schema): Docstring defining the input parameters (query, library, max_chars) and the output for the MCP tool schema.

```python
"""
Search the docs for a given query and library.
Supports langchain, llama-index, and openai.

Args:
    query: The query to search for (e.g.: "Chroma DB").
    library: The library to search in. One of langchain, llama-index, openai.
    max_chars: Maximum characters to return (default: 1000 for free tier).

Returns:
    Text from the documentation.
"""
```
- main.py:24-43 (helper): Helper that performs the web search through the Serper API; used by `get_docs`.

```python
async def search_web(query: str) -> dict | None:
    """
    Search the web for the given query using Serper's Google Search API.
    """
    payload = json.dumps({"q": query, "num": 2})
    headers = {
        "X-API-KEY": os.getenv("SERPER_API_KEY"),
        "Content-Type": "application/json",
    }
    async with httpx.AsyncClient() as client:
        try:
            response = await client.post(
                url=SERPER_URL, headers=headers, data=payload, timeout=30.0
            )
            response.raise_for_status()
            return response.json()
        except httpx.TimeoutException:
            print("Timeout occurred while searching the web.")
            return {"organic": []}
```
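`get_docs` depends only on the `organic` list in Serper's response and on each entry's `link` field. For orientation, a trimmed example of that shape (illustrative values; a real response carries additional fields):

```python
# Example of the Serper response shape consumed by get_docs.
example_serper_response = {
    "organic": [
        {
            "title": "Chroma | LangChain",
            "link": "https://python.langchain.com/docs/integrations/vectorstores/chroma/",
            "snippet": "...",
            "position": 1,
        },
    ],
}
```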
- main.py:45-62 (helper): Helper that fetches a URL and extracts the main text content of the page; used by `get_docs`.

```python
async def fetch_url(url: str):
    """
    Fetch the page at the given URL and extract its main text content.
    """
    async with httpx.AsyncClient() as client:
        try:
            response = await client.get(url=url, timeout=30.0)
            soup = BeautifulSoup(response.text, "html.parser")
            # Target main content areas instead of all page text
            main_content = soup.find("main") or soup.find("article") or soup
            text = main_content.get_text(separator="\n\n", strip=True)
            return text
        except httpx.TimeoutException:
            return "Timeout occurred while fetching the URL."
```
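To make these handlers reachable, main.py needs a server entry point. A minimal sketch, assuming the official SDK's FastMCP (the stdio transport choice is an assumption, though `mcp.run()` defaults to stdio):

```python
if __name__ == "__main__":
    # Serve get_docs over stdio so MCP clients can spawn this process.
    mcp.run(transport="stdio")
```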