
Dedalus MCP Documentation Server

by kitan23

ask_docs

Ask questions about documentation and receive AI-generated answers with sources. Provide a question, an optional list of context documents, and an optional user ID used for rate limiting.

Instructions

Answer questions about documentation using AI.

Args:
    question: The question to answer
    context_docs: Optional list of document paths to use as context
    max_context_length: Maximum characters of context to include
    user_id: Optional user identifier for rate limiting

Returns:
    AI-generated answer with sources

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| context_docs | No | Optional list of document paths to use as context | null |
| max_context_length | No | Maximum characters of context to include | 4000 |
| question | Yes | The question to answer | |
| user_id | No | Optional user identifier for rate limiting | null |

Input Schema (JSON Schema)

{
  "properties": {
    "context_docs": {
      "anyOf": [
        { "items": { "type": "string" }, "type": "array" },
        { "type": "null" }
      ],
      "default": null,
      "title": "Context Docs"
    },
    "max_context_length": {
      "default": 4000,
      "title": "Max Context Length",
      "type": "integer"
    },
    "question": { "title": "Question", "type": "string" },
    "user_id": {
      "anyOf": [ { "type": "string" }, { "type": "null" } ],
      "default": null,
      "title": "User Id"
    }
  },
  "required": [ "question" ],
  "title": "ask_docsArguments",
  "type": "object"
}
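Only `question` is required; the other fields fall back to the defaults declared in the schema. The sketch below shows one way a client might assemble an arguments payload; `build_arguments` is a hypothetical helper, not part of the server.

```python
# Defaults as declared in the ask_docs JSON Schema above.
SCHEMA_DEFAULTS = {
    'context_docs': None,
    'max_context_length': 4000,
    'user_id': None,
}

def build_arguments(question: str, **overrides) -> dict:
    """Hypothetical helper: fill optional ask_docs fields with schema defaults."""
    args = {**SCHEMA_DEFAULTS, 'question': question}
    args.update(overrides)
    return args

payload = build_arguments('How do I configure rate limiting?')
```

Any field passed explicitly (e.g. `user_id='alice'`) overrides the schema default.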

Implementation Reference

  • The primary handler function for the 'ask_docs' tool. It implements the core logic: rate limiting, automatic document search via search_docs, context gathering, and AI response generation using OpenAI or fallback context provision.
    @mcp.tool()
    def ask_docs(
        question: str,
        context_docs: Optional[List[str]] = None,
        max_context_length: int = 4000,
        user_id: Optional[str] = None,
    ) -> Dict[str, Any]:
        """
        Answer questions about documentation using AI

        Args:
            question: The question to answer
            context_docs: Optional list of document paths to use as context
            max_context_length: Maximum characters of context to include
            user_id: Optional user identifier for rate limiting

        Returns:
            AI-generated answer with sources
        """
        # Rate limiting check
        identifier = user_id or 'default'
        if not rate_limiter.is_allowed(identifier):
            reset_time = rate_limiter.get_reset_time(identifier)
            return {
                'error': 'Rate limit exceeded',
                'message': f'Too many requests. Please wait {reset_time} seconds before trying again.',
                'reset_in_seconds': reset_time,
                'limit': '10 requests per minute',
            }

        # If no context docs specified, search for relevant ones
        if not context_docs:
            search_results = search_docs(question, max_results=3)
            context_docs = [result['path'] for result in search_results]

        # Gather context from documents
        context_parts = []
        sources = []
        total_length = 0

        for doc_path in context_docs:
            if total_length >= max_context_length:
                break
            try:
                file_path = DOCS_DIR / doc_path
                content = file_path.read_text()

                # Truncate if needed
                remaining = max_context_length - total_length
                if len(content) > remaining:
                    content = content[:remaining] + '...'

                context_parts.append(f'--- {doc_path} ---\n{content}')
                sources.append(doc_path)
                total_length += len(content)
            except (OSError, UnicodeDecodeError):
                continue

        if not context_parts:
            return {
                'answer': "I couldn't find relevant documentation to answer your question.",
                'sources': [],
                'confidence': 'low',
            }

        full_context = '\n\n'.join(context_parts)

        # Try to use OpenAI if API key is available
        api_key = os.getenv('OPENAI_API_KEY')
        if api_key:
            try:
                from openai import OpenAI

                client = OpenAI(api_key=api_key)
                response = client.chat.completions.create(
                    model='gpt-4o-mini',
                    messages=[
                        {
                            'role': 'system',
                            'content': 'You are a helpful assistant that answers questions based on provided documentation. Only use information from the provided context.',
                        },
                        {
                            'role': 'user',
                            'content': f"""Based on the following documentation, please answer this question: {question}

Documentation:
{full_context}

Please provide a clear, concise answer based only on the provided documentation.""",
                        },
                    ],
                    temperature=0.7,
                    max_tokens=500,
                )
                return {
                    'answer': response.choices[0].message.content,
                    'sources': sources,
                    'context_length': total_length,
                    'model': 'gpt-4o-mini',
                    'confidence': 'high',
                }
            except Exception as e:
                # Fall back to context-only response if OpenAI fails
                return {
                    'answer': f'Error using OpenAI: {str(e)}',
                    'context': full_context[:500] + '...' if len(full_context) > 500 else full_context,
                    'sources': sources,
                    'context_length': total_length,
                    'error': str(e),
                }

        # If no API key, return context for Dedalus deployment
        return {
            'question': question,
            'context': full_context[:500] + '...' if len(full_context) > 500 else full_context,
            'sources': sources,
            'context_length': total_length,
            'note': "No API key found. When deployed to Dedalus, this will use the platform's LLM integration via BYOK",
        }
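The context-gathering loop enforces a total budget of `max_context_length` characters, truncating the document that crosses the limit. The standalone sketch below mirrors that loop over an in-memory dict instead of files on disk (the document names and contents are made up for illustration):

```python
def gather_context(docs: dict[str, str], max_context_length: int = 4000):
    """Sketch of ask_docs' context loop: stop once the length budget is
    reached, truncating the document that crosses it."""
    context_parts, sources, total_length = [], [], 0
    for doc_path, content in docs.items():
        if total_length >= max_context_length:
            break
        remaining = max_context_length - total_length
        if len(content) > remaining:
            content = content[:remaining] + '...'  # same truncation as ask_docs
        context_parts.append(f'--- {doc_path} ---\n{content}')
        sources.append(doc_path)
        total_length += len(content)
    return '\n\n'.join(context_parts), sources, total_length

ctx, sources, used = gather_context(
    {'a.md': 'x' * 30, 'b.md': 'y' * 30, 'c.md': 'z' * 30},
    max_context_length=50,
)
```

Note one quirk faithfully reproduced here: `total_length` is counted after the `'...'` suffix is appended, so the reported length can slightly exceed `max_context_length`.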
  • src/main.py:38-55 (registration)
    The MCP server initialization where 'ask_docs' is described in the instructions string, indicating its registration and purpose within the toolset.
    mcp = FastMCP(
        name='Documentation Server',
        host=host,
        port=port,
        instructions="""This MCP server provides access to documentation files with AI-powered search and Q&A capabilities.

Available tools:
- list_docs(): List all documentation files
- search_docs(query): Search documentation with keywords
- ask_docs(question): Get AI-powered answers from documentation
- index_docs(): Index documents for better search
- analyze_docs(task): Analyze documentation for specific tasks

Resources:
- docs://{path}: Access any markdown documentation file directly

This server includes rate limiting (10 requests/minute) to protect API keys.""",
    )
  • RateLimiter class used by ask_docs for rate limiting API requests (10/minute).
    class RateLimiter:
        """Simple rate limiter to protect API keys from abuse"""

        def __init__(self, max_requests: int = 10, window_seconds: int = 60):
            self.max_requests = max_requests
            self.window_seconds = window_seconds
            self.requests = defaultdict(list)

        def is_allowed(self, identifier: str) -> bool:
            """Check if request is allowed for this identifier"""
            now = time.time()
            # Clean old requests outside window
            self.requests[identifier] = [
                req_time
                for req_time in self.requests[identifier]
                if now - req_time < self.window_seconds
            ]
            # Check if under limit
            if len(self.requests[identifier]) < self.max_requests:
                self.requests[identifier].append(now)
                return True
            return False

        def get_reset_time(self, identifier: str) -> int:
            """Get seconds until rate limit resets"""
            if not self.requests[identifier]:
                return 0
            oldest = min(self.requests[identifier])
            return max(0, int(self.window_seconds - (time.time() - oldest)))
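The sliding-window behaviour can be exercised in isolation. The condensed copy below adds an injectable `clock` parameter (an addition for testability, not part of the server's class) so the window can be advanced without sleeping:

```python
import time
from collections import defaultdict

class RateLimiter:
    """Condensed sketch of the server's sliding-window limiter,
    with an injectable clock so tests need not sleep."""

    def __init__(self, max_requests=10, window_seconds=60, clock=time.time):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self.clock = clock
        self.requests = defaultdict(list)

    def is_allowed(self, identifier: str) -> bool:
        now = self.clock()
        # Drop timestamps that have aged out of the window
        self.requests[identifier] = [
            t for t in self.requests[identifier]
            if now - t < self.window_seconds
        ]
        if len(self.requests[identifier]) < self.max_requests:
            self.requests[identifier].append(now)
            return True
        return False

fake_now = [0.0]
limiter = RateLimiter(max_requests=2, window_seconds=60, clock=lambda: fake_now[0])

first = limiter.is_allowed('alice')   # under the limit
second = limiter.is_allowed('alice')  # still under
third = limiter.is_allowed('alice')   # over the limit
fake_now[0] = 61.0                    # window has elapsed
fourth = limiter.is_allowed('alice')  # allowed again
```

Because old timestamps are pruned on every call, memory per identifier stays bounded by `max_requests`.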
  • The search_docs helper tool called internally by ask_docs to find relevant documents when context_docs is not provided.
    @mcp.tool()
    def search_docs(
        query: str,
        max_results: int = 5,
        search_content: bool = True,
        search_titles: bool = True,
    ) -> List[Dict[str, Any]]:
        """
        Search documentation using keyword matching (semantic search ready)

        Args:
            query: Search query string
            max_results: Maximum number of results to return
            search_content: Whether to search in document content
            search_titles: Whether to search in document titles

        Returns:
            List of matching documents with relevance scores
        """
        query_lower = query.lower()
        results = []

        for file_path in DOCS_DIR.rglob('*.md'):
            if not file_path.is_file():
                continue

            score = 0
            metadata = get_doc_metadata(file_path)

            # Title matching
            if search_titles and query_lower in metadata['title'].lower():
                score += 10

            # Content matching
            if search_content:
                try:
                    content = file_path.read_text().lower()
                    # Count occurrences
                    occurrences = content.count(query_lower)
                    if occurrences > 0:
                        score += min(occurrences, 5)  # Cap at 5 points for content
                        # Find snippet around first occurrence
                        idx = content.find(query_lower)
                        start = max(0, idx - 100)
                        end = min(len(content), idx + 100)
                        snippet = content[start:end]
                        if start > 0:
                            snippet = '...' + snippet
                        if end < len(content):
                            snippet = snippet + '...'
                        metadata['snippet'] = snippet
                except (OSError, UnicodeDecodeError):
                    pass

            if score > 0:
                metadata['relevance_score'] = score
                results.append(metadata)

        # Sort by relevance score
        results.sort(key=lambda x: x['relevance_score'], reverse=True)
        return results[:max_results]
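The relevance formula is simple: 10 points for a title hit plus one point per content occurrence, capped at 5. A minimal sketch of just that scoring step (the titles and contents below are invented for illustration):

```python
def score_document(query: str, title: str, content: str) -> int:
    """Sketch of search_docs' scoring: 10 points for a title match,
    plus one point per content occurrence, capped at 5."""
    query = query.lower()
    score = 0
    if query in title.lower():
        score += 10
    score += min(content.lower().count(query), 5)
    return score

# Title match plus 8 content hits (capped at 5) -> 15
s1 = score_document('rate limit', 'Rate Limiting Guide', 'rate limit ' * 8)
# No title match, 2 content hits -> 2
s2 = score_document('rate limit', 'Deployment', 'rate limit here, rate limit there')
```

The cap keeps one keyword-stuffed document from drowning out a title match, so title relevance always dominates content frequency.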
  • Input schema defined by function parameters and docstring for the ask_docs tool.
    def ask_docs(
        question: str,
        context_docs: Optional[List[str]] = None,
        max_context_length: int = 4000,
        user_id: Optional[str] = None,
    ) -> Dict[str, Any]:
        """
        Answer questions about documentation using AI

        Args:
            question: The question to answer
            context_docs: Optional list of document paths to use as context
            max_context_length: Maximum characters of context to include
            user_id: Optional user identifier for rate limiting

        Returns:
            AI-generated answer with sources
        """

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/kitan23/Python_MCP_Server_Example_2'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.