get_docs
Search and retrieve the latest documentation for a given query and library. Supports langchain, llama-index, autogen, agno, openai-agents-sdk, mcp-doc, camel-ai, and crew-ai. Provide a query and a library name to extract the relevant text.
Instructions
Search the latest documentation for a given query and library.
Supports langchain, llama-index, autogen, agno, openai-agents-sdk, mcp-doc, camel-ai, and crew-ai.

Parameters:
- query: the query to search for (e.g. "React Agent")
- library: the library to search (e.g. "agno")

Returns:
- Text from the documentation
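For example, the tool can be invoked from any MCP client. A minimal sketch using the official `mcp` Python SDK over the stdio transport; the launch command `python main.py` is an assumption about how this particular server is started:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch the server as a subprocess over stdio.
    # "python main.py" is an assumed launch command; adjust as needed.
    server = StdioServerParameters(command="python", args=["main.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "get_docs",
                arguments={"query": "React Agent", "library": "agno"},
            )
            print(result.content)

asyncio.run(main())
```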
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| library | Yes | The library to search, one of the supported libraries listed above (e.g. "agno") | |
| query | Yes | The query to search for (e.g. "React Agent") | |
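The generated JSON Schema is not reproduced here; the sketch below is inferred from the handler signature (`query: str` and `library: str`, both required), not captured from the server:

```json
{
  "type": "object",
  "properties": {
    "query": { "title": "Query", "type": "string" },
    "library": { "title": "Library", "type": "string" }
  },
  "required": ["query", "library"]
}
```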
Implementation Reference
- main.py:64-89 (handler): The handler for the `get_docs` tool, registered via the `@mcp.tool()` decorator. It builds a site-restricted Google query for the chosen library's docs, searches via the Serper API, fetches each of the top results, extracts their text with BeautifulSoup, and returns the concatenated text.

  ```python
  @mcp.tool()
  async def get_docs(query: str, library: str):
      """
      Search the latest documentation for a given query and library.
      Supports langchain, llama-index, autogen, agno, openai-agents-sdk,
      mcp-doc, camel-ai, and crew-ai.

      Parameters:
          query: the query to search for (e.g. "React Agent")
          library: the library to search (e.g. "agno")

      Returns:
          Text from the documentation
      """
      if library not in docs_urls:
          raise ValueError(f"Library {library} not supported by this tool")
      query = f"site:{docs_urls[library]} {query}"
      results = await search_web(query)
      if len(results["organic"]) == 0:
          return "No results found"
      text = ""
      for result in results["organic"]:
          text += await fetch_url(result["link"])
      return text
  ```
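  Since FastMCP's `tool()` decorator returns the original function, the handler can also be exercised directly in-process. A test sketch, assuming `main.py` is importable and `SERPER_API_KEY` is set in the environment:

  ```python
  import asyncio

  from main import get_docs  # assumes main.py is on the import path

  # Call the handler directly, bypassing the MCP transport.
  # Requires SERPER_API_KEY and network access.
  text = asyncio.run(get_docs(query="React Agent", library="agno"))
  print(text[:500])
  ```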
- main.py:36-53 (helper): `search_web` performs the web search via the Serper API and returns the parsed JSON results, or an empty `organic` list on timeout so the caller can degrade gracefully.

  ```python
  async def search_web(query: str) -> dict | None:
      payload = json.dumps({"q": query, "num": 2})
      headers = {
          "X-API-KEY": os.getenv("SERPER_API_KEY"),
          "Content-Type": "application/json",
      }
      async with httpx.AsyncClient() as client:
          try:
              response = await client.post(
                  SERPER_URL, headers=headers, data=payload, timeout=30.0
              )
              response.raise_for_status()
              return response.json()
          except httpx.TimeoutException:
              return {"organic": []}
  ```
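  For reference, the handler only depends on the `organic` list and each entry's `link` field. An illustrative (not verbatim) response shape:

  ```python
  # Trimmed, illustrative shape of a Serper response; only "organic"
  # and each entry's "link" are read by get_docs. Values are made up.
  example_response = {
      "organic": [
          {"title": "...", "link": "https://docs.agno.com/agents", "snippet": "..."},
          {"title": "...", "link": "https://docs.agno.com/tools", "snippet": "..."},
      ]
  }
  ```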
- main.py:54-62 (helper): `fetch_url` fetches a URL, parses the HTML with BeautifulSoup, and returns the extracted text, or the string "Timeout error" on timeout.

  ```python
  async def fetch_url(url: str):
      async with httpx.AsyncClient() as client:
          try:
              response = await client.get(url, timeout=30.0)
              soup = BeautifulSoup(response.text, "html.parser")
              text = soup.get_text()
              return text
          except httpx.TimeoutException:
              return "Timeout error"
  ```
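  Note that `soup.get_text()` keeps the page's raw whitespace, so the returned text can be noisy. A self-contained sketch of an optional cleanup step (not part of the original code):

  ```python
  from bs4 import BeautifulSoup

  html = "<html><body><h1>Title</h1>\n<p>Some   text</p></body></html>"
  soup = BeautifulSoup(html, "html.parser")

  raw = soup.get_text()            # keeps newlines and runs of spaces
  compact = " ".join(raw.split())  # -> "Title Some text"
  print(compact)
  ```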
- main.py:25-34 (helper): `docs_urls` maps each supported library to its documentation base URL, used to restrict the search to that site.

  ```python
  docs_urls = {
      "langchain": "python.langchain.com/docs",
      "llama-index": "docs.llamaindex.ai/en/stable",
      "autogen": "microsoft.github.io/autogen/stable",
      "agno": "docs.agno.com",
      "openai-agents-sdk": "openai.github.io/openai-agents-python",
      "mcp-doc": "modelcontextprotocol.io",
      "camel-ai": "docs.camel-ai.org",
      "crew-ai": "docs.crewai.com",
  }
  ```
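  To see how the mapping is used, this mirrors the site-restricted query that `get_docs` builds for a search against the agno docs:

  ```python
  # Mirrors the query construction in get_docs (main.py:64-89).
  library, query = "agno", "React Agent"
  site_query = f"site:{docs_urls[library]} {query}"
  print(site_query)  # -> site:docs.agno.com React Agent
  ```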