
sync_catalog

Scans markdown files in the catalog directory to build in-memory indices for poetry management. Call this tool before using other catalog functions to ensure your poetry collection is properly indexed.

Instructions

Synchronize catalog from filesystem.

Scans all markdown files in catalog/ directory and builds in-memory indices. This should be called before using other catalog tools.

Args:

- `force_rescan`: If `True`, rescan all files even if already loaded.

Returns:

- `SyncResult` with statistics about the sync operation.
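For reference, the example payload embedded in the `SyncResult` model's schema shows the shape of a typical response:

```json
{
  "total_poems": 381,
  "new_poems": 5,
  "updated_poems": 12,
  "skipped_poems": 2,
  "warnings": [
    "poem_without_frontmatter.md: missing frontmatter, used defaults",
    "broken_file.md: invalid YAML, skipped"
  ],
  "duration_seconds": 2.34
}
```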

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| `force_rescan` | No | If `True`, rescan all files even if already loaded | `false` |
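Over the wire, an MCP client invokes this tool with a standard `tools/call` request. A minimal sketch (transport framing elided; the request `id` is arbitrary):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "sync_catalog",
    "arguments": { "force_rescan": true }
  }
}
```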

Implementation Reference

  • The `sync_catalog` tool handler: decorated with `@mcp.tool()`, retrieves the catalog instance, calls `catalog.sync()`, logs progress, and returns a `SyncResult`.

```python
@mcp.tool()
async def sync_catalog(force_rescan: bool = False) -> SyncResult:
    """
    Synchronize catalog from filesystem.

    Scans all markdown files in catalog/ directory and builds in-memory
    indices. This should be called before using other catalog tools.

    Args:
        force_rescan: If True, rescan all files even if already loaded

    Returns:
        SyncResult with statistics about the sync operation
    """
    logger.info(f"Syncing catalog (force_rescan={force_rescan})...")
    cat = get_catalog()
    result = cat.sync(force_rescan=force_rescan)
    logger.info(f"Sync complete: {result.total_poems} poems")
    return result
```
  • Pydantic `BaseModel` defining the `SyncResult` returned by the `sync_catalog` tool, including fields for sync statistics.

```python
class SyncResult(BaseModel):
    """
    Result from sync_catalog operation.

    Reports statistics about catalog synchronization: how many poems
    were discovered, added, updated, or skipped.
    """

    total_poems: int = Field(
        ..., description="Total number of poems in catalog after sync"
    )
    new_poems: int = Field(
        ..., description="Number of new poems discovered in this sync"
    )
    updated_poems: int = Field(
        ..., description="Number of existing poems with updated metadata"
    )
    skipped_poems: int = Field(
        default=0, description="Number of poems skipped due to parse errors"
    )
    warnings: list[str] = Field(
        default_factory=list,
        description="List of warning messages encountered during sync"
    )
    duration_seconds: float = Field(
        ..., description="Time taken for sync operation"
    )

    class Config:
        """Pydantic configuration."""

        json_schema_extra = {
            "example": {
                "total_poems": 381,
                "new_poems": 5,
                "updated_poems": 12,
                "skipped_poems": 2,
                "warnings": [
                    "poem_without_frontmatter.md: missing frontmatter, used defaults",
                    "broken_file.md: invalid YAML, skipped"
                ],
                "duration_seconds": 2.34
            }
        }
```
  • Core `Catalog.sync()` method implementing the filesystem scan, poem parsing, indexing, statistics tracking, and `SyncResult` creation called by the tool handler.

```python
def sync(
    self,
    force_rescan: bool = False,
    update_missing_metadata: bool = True
) -> SyncResult:
    """
    Sync catalog from filesystem.

    Scans catalog/ directory recursively for .md files and builds indices.

    Args:
        force_rescan: If True, rescan all files even if already loaded
        update_missing_metadata: Auto-populate missing frontmatter

    Returns:
        SyncResult with statistics
    """
    start_time = time.perf_counter()

    if force_rescan:
        self.index.clear()

    # Track statistics
    total_before = len(self.index.all_poems)
    new_poems = 0
    updated_poems = 0
    skipped_poems = 0
    warnings: list[str] = []

    # Scan for markdown files
    logger.info(f"Scanning catalog directory: {self.catalog_dir}")
    if not self.catalog_dir.exists():
        raise FileNotFoundError(f"Catalog directory not found: {self.catalog_dir}")

    markdown_files = list(self.catalog_dir.rglob("*.md"))
    logger.info(f"Found {len(markdown_files)} markdown files")

    # Parse each file
    for md_file in markdown_files:
        try:
            poem = parse_poem_file(md_file, self.vault_root)

            # Check if poem already exists
            existing = self.index.get_by_id(poem.id)
            if existing:
                # Check if updated (don't actually need to track this for now)
                # Just always add the new version
                if not force_rescan:
                    updated_poems += 1
            else:
                new_poems += 1

            # Always add the poem (will overwrite if exists)
            self.index.add_poem(poem)

        except FrontmatterParseError as e:
            skipped_poems += 1
            warning_msg = f"{md_file.name}: {str(e)}"
            warnings.append(warning_msg)
            logger.warning(warning_msg)
        except Exception as e:
            skipped_poems += 1
            warning_msg = f"{md_file.name}: Unexpected error: {str(e)}"
            warnings.append(warning_msg)
            logger.error(warning_msg)

    total_after = len(self.index.all_poems)
    duration = time.perf_counter() - start_time

    # Update last sync timestamp
    from datetime import datetime
    self.last_sync = datetime.now().isoformat()

    logger.info(
        f"Catalog sync complete: {total_after} poems "
        f"({new_poems} new, {updated_poems} updated, {skipped_poems} skipped) "
        f"in {duration:.2f}s"
    )

    return SyncResult(
        total_poems=total_after,
        new_poems=new_poems,
        updated_poems=updated_poems,
        skipped_poems=skipped_poems,
        warnings=warnings,
        duration_seconds=duration
    )
```
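The scan-and-count loop above can be exercised in isolation. The following is a simplified, self-contained sketch, not the real implementation: plain dicts stand in for the `PoemIndex`, and a trivial frontmatter check stands in for `parse_poem_file`, both of which are assumptions here. It shows how the new/updated/skipped counts fall out of the same control flow:

```python
import time
from pathlib import Path


def sync_directory(catalog_dir: Path, index: dict) -> dict:
    """Simplified stand-in for Catalog.sync(): scan *.md files recursively,
    count new vs. updated entries, and skip files that fail to 'parse'."""
    start = time.perf_counter()
    new_poems = updated_poems = skipped_poems = 0
    warnings: list[str] = []

    for md_file in sorted(catalog_dir.rglob("*.md")):
        try:
            text = md_file.read_text(encoding="utf-8")
            if not text.strip().startswith("---"):
                # Stand-in "parse error": require YAML frontmatter
                raise ValueError("missing frontmatter")
            poem_id = md_file.stem
            if poem_id in index:
                updated_poems += 1
            else:
                new_poems += 1
            index[poem_id] = text  # overwrite, like index.add_poem()
        except Exception as e:
            skipped_poems += 1
            warnings.append(f"{md_file.name}: {e}")

    return {
        "total_poems": len(index),
        "new_poems": new_poems,
        "updated_poems": updated_poems,
        "skipped_poems": skipped_poems,
        "warnings": warnings,
        "duration_seconds": time.perf_counter() - start,
    }
```

Running the sketch twice over the same directory shows the semantics: the first pass reports files as new, the second pass reports them as updated, and unparsable files are skipped (with a warning) each time.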

MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/james-livefront/poetry-mcp'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.