
Dedalus MCP Documentation Server

by kitan23

index_docs

Index or re-index all documentation to improve search functionality and query performance across the documentation server.

Instructions

Index or re-index all documentation for improved search

Args:
    rebuild: Whether to rebuild the entire index from scratch

Returns:
    Indexing statistics

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| rebuild | No | Whether to rebuild the entire index from scratch | `False` |
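
Since the schema is derived from the function signature, the tool's input schema for `rebuild: bool = False` would look roughly like the following. This is a hypothetical sketch of the generated JSON Schema, not output captured from the server:

```python
# Hypothetical approximation of the JSON Schema FastMCP derives
# from the signature `index_docs(rebuild: bool = False)`.
expected_schema = {
    "type": "object",
    "properties": {
        "rebuild": {
            "type": "boolean",
            "default": False,
        },
    },
    "required": [],  # rebuild has a default, so nothing is required
}
```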

Implementation Reference

  • The handler function for the 'index_docs' MCP tool. If rebuild=True, it clears the metadata and embeddings caches; it then iterates over all .md files under DOCS_DIR, calls get_doc_metadata on each, accumulates the number of files indexed and their total size, records per-file errors, and returns the indexing statistics. It is registered via the @mcp.tool() decorator, which derives the tool's input schema from the function signature and docstring.
    @mcp.tool()
    def index_docs(rebuild: bool = False) -> Dict[str, Any]:
        """
        Index or re-index all documentation for improved search
    
        Args:
            rebuild: Whether to rebuild the entire index from scratch
    
        Returns:
            Indexing statistics
        """
        if rebuild:
            METADATA_CACHE.clear()
            EMBEDDINGS_CACHE.clear()
    
        stats = {
            'files_indexed': 0,
            'total_size': 0,
            'errors': [],
            'timestamp': datetime.now().isoformat(),
        }
    
        for file_path in DOCS_DIR.rglob('*.md'):
            try:
                if file_path.is_file():
                    metadata = get_doc_metadata(file_path)
                    stats['files_indexed'] += 1
                    stats['total_size'] += metadata['size']
    
                    # Here you would generate embeddings for semantic search
                    # EMBEDDINGS_CACHE[file_path] = generate_embeddings(content)
            except Exception as e:
                stats['errors'].append({'file': str(file_path), 'error': str(e)})
    
        return stats
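The handler relies on module-level state that is not shown in this excerpt. A minimal sketch of what that setup might look like (the names match the excerpt; the values and the server name are assumptions, not the server's actual configuration):

```python
from pathlib import Path

# Hypothetical module-level state assumed by index_docs above.
DOCS_DIR = Path("docs")        # root of the markdown documentation tree (assumed path)
METADATA_CACHE: dict = {}      # file_path -> metadata dict, filled by get_doc_metadata
EMBEDDINGS_CACHE: dict = {}    # file_path -> embedding vector, cleared on rebuild

# The FastMCP instance whose @mcp.tool() decorator registers the handlers:
# from mcp.server.fastmcp import FastMCP
# mcp = FastMCP("dedalus-docs")  # server name is an assumption
```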
  • Helper function used by index_docs (and other tools) to extract metadata from a markdown file: title, path relative to DOCS_DIR, modification time, size, and an MD5 content hash. Results are cached in METADATA_CACHE. The title defaults to a prettified file stem, but the function attempts to replace it with the first H1 heading found in the first ten lines.
    def get_doc_metadata(file_path: Path) -> Dict[str, Any]:
        """Extract metadata from markdown files"""
        if file_path in METADATA_CACHE:
            return METADATA_CACHE[file_path]
    
        metadata = {
            'title': file_path.stem.replace('-', ' ').title(),
            'path': str(file_path.relative_to(DOCS_DIR)),
            'modified': datetime.fromtimestamp(file_path.stat().st_mtime).isoformat(),
            'size': file_path.stat().st_size,
            'hash': hashlib.md5(file_path.read_bytes()).hexdigest(),
        }
    
        # Try to extract title from first # heading
        try:
            content = file_path.read_text()
            lines = content.split('\n')
            for line in lines[:10]:  # Check first 10 lines
                if line.startswith('# '):
                    metadata['title'] = line[2:].strip()
                    break
        except (OSError, UnicodeDecodeError):
            pass
    
        METADATA_CACHE[file_path] = metadata
        return metadata
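The title-extraction step can be illustrated in isolation. This is a standalone sketch of the same logic, not the server's code: if an `# ` heading appears in the first ten lines it wins, otherwise the hyphenated file stem is title-cased:

```python
import tempfile
from pathlib import Path

def extract_title(file_path: Path) -> str:
    """Title from the first H1 in the first 10 lines, else the prettified stem."""
    title = file_path.stem.replace('-', ' ').title()
    try:
        for line in file_path.read_text().split('\n')[:10]:
            if line.startswith('# '):
                return line[2:].strip()
    except (OSError, UnicodeDecodeError):
        pass
    return title

with tempfile.TemporaryDirectory() as tmp:
    doc = Path(tmp) / 'getting-started.md'
    doc.write_text('# Getting Started\n\nHello.')
    titled = extract_title(doc)       # -> 'Getting Started' (from the H1)

    doc2 = Path(tmp) / 'api-reference.md'
    doc2.write_text('no heading here')
    fallback = extract_title(doc2)    # -> 'Api Reference' (from the stem)
```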
