sync_source

Fetch and index documents from external connectors into the search index. Supports running a single connector or all, with an optional dry-run mode to fetch without indexing.

Instructions

Fetch documents from one or all configured external connectors.

    Runs the enabled connectors defined in ``config.yaml`` and indexes
    each fetched document into the search index.

    Args:
        source_type: Connector type key to run, or ``null`` for all.
        dry_run: Fetch without indexing.
    

Input Schema

Name        | Required | Description | Default
source_type | No       |             | null
dry_run     | No       |             | false
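The schema itself carries no parameter descriptions, so the filtering semantics of `source_type` are worth spelling out. The sketch below illustrates how `null` (Python `None`) selects every enabled connector while a key selects just one; `ConnectorConfig` and `select_connectors` are illustrative names, not part of the server.

```python
from dataclasses import dataclass


@dataclass
class ConnectorConfig:
    type: str
    enabled: bool


def select_connectors(connectors, source_type=None):
    """Return enabled connectors, optionally filtered by type key."""
    return [
        c
        for c in connectors
        if c.enabled and (source_type is None or c.type == source_type)
    ]


configured = [
    ConnectorConfig("jira", enabled=True),
    ConnectorConfig("slack", enabled=True),
    ConnectorConfig("confluence", enabled=False),  # disabled: never selected
]

# None selects every enabled connector; a key selects just that one.
all_enabled = select_connectors(configured)        # jira + slack
just_jira = select_connectors(configured, "jira")  # jira only
```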

Implementation Reference

  • The `sync_source` MCP tool handler: fetches documents from configured external data connectors (e.g. jira, slack) and indexes them into MemoryMesh stores. Accepts optional `source_type` filter and `dry_run` flag. Returns status, doc counts, and per-connector results.
    def sync_source(
        source_type: Annotated[
            str | None,
            (
                "Connector type to sync, e.g. 'jira' or 'slack'.  "
                "Pass null to sync all enabled connectors."
            ),
        ] = None,
        dry_run: Annotated[
            bool,
            "If true, fetch documents but do not write to the index.",
        ] = False,
    ) -> dict:
        """Fetch documents from one or all configured external connectors.
    
        Runs the enabled connectors defined in ``config.yaml`` and indexes
        each fetched document into the search index.
    
        Args:
            source_type: Connector type key to run, or ``null`` for all.
            dry_run: Fetch without indexing.
        """
        from memorymesh.connectors.registry import get_connector_classes
        from memorymesh.server.auth_guard import check_access
    
        if (err := check_access(ctx, "index")) is not None:
            return err
    
        connectors_to_run = [
            c
            for c in ctx.config.connectors
            if c.enabled and (source_type is None or c.type == source_type)
        ]
    
        if not connectors_to_run:
            label = f"'{source_type}'" if source_type else "any"
            return {
                "status": "no_connectors",
                "message": f"No enabled connector found matching {label}.",
            }
    
        total_docs = 0
        total_errors = 0
        results: list[dict] = []
    
        for conn_cfg in connectors_to_run:
            entry: dict = {"type": conn_cfg.type, "docs": 0, "errors": 0}
            try:
                cfg_cls, conn_cls = get_connector_classes(conn_cfg.type)
                config_obj = cfg_cls(**conn_cfg.config)
                connector = conn_cls(config_obj)
            except Exception as exc:  # Exception already covers KeyError from the registry
                entry["errors"] = 1
                entry["error"] = str(exc)
                results.append(entry)
                total_errors += 1
                continue
    
            source_name = getattr(config_obj, "source_name", conn_cfg.type)
            doc_count = 0
    
            try:
                for doc in connector.fetch_documents():
                    doc_count += 1
                    if not dry_run:
                        result = ctx.indexer.index_parsed_document(doc, source_name)
                        if result.status == "parse_error":
                            entry["errors"] += 1
                            total_errors += 1
            except Exception as exc:
                entry["errors"] += 1
                entry["error"] = str(exc)
                total_errors += 1
    
            entry["docs"] = doc_count
            results.append(entry)
            total_docs += doc_count
    
        return {
            "status": "ok",
            "dry_run": dry_run,
            "total_docs": total_docs,
            "total_errors": total_errors,
            "connectors": results,
        }
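The handler's aggregation and `dry_run` behavior can be sketched with a stub connector and an in-memory list standing in for the real indexer; `StubConnector` and `sync` below are stand-ins, not the server's classes.

```python
class StubConnector:
    """Yields a fixed list of documents, mimicking fetch_documents()."""

    def __init__(self, docs):
        self._docs = list(docs)

    def fetch_documents(self):
        yield from self._docs


def sync(connectors, dry_run=False):
    index = []  # stands in for the real search index
    results, total_docs = [], 0
    for name, conn in connectors.items():
        entry = {"type": name, "docs": 0, "errors": 0}
        for doc in conn.fetch_documents():
            entry["docs"] += 1
            if not dry_run:  # dry_run counts documents but never writes
                index.append(doc)
        results.append(entry)
        total_docs += entry["docs"]
    report = {
        "status": "ok",
        "dry_run": dry_run,
        "total_docs": total_docs,
        "connectors": results,
    }
    return report, index


report, index = sync({"jira": StubConnector(["a", "b"])}, dry_run=True)
# With dry_run=True the report counts 2 docs while the index stays empty.
```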
  • Input parameters of sync_source: `source_type` (optional string or null; defaults to None, meaning all enabled connectors) and `dry_run` (boolean, defaults to False).
        source_type: Annotated[
            str | None,
            (
                "Connector type to sync, e.g. 'jira' or 'slack'.  "
                "Pass null to sync all enabled connectors."
            ),
        ] = None,
        dry_run: Annotated[
            bool,
            "If true, fetch documents but do not write to the index.",
        ] = False,
    ) -> dict:
  • The `register()` function that registers sync_source onto the FastMCP instance via the `@mcp.tool()` decorator, with `AppContext` injected via closure.
    def register(mcp: FastMCP, ctx: AppContext) -> None:
        """Register the ``sync_source`` tool on *mcp* with *ctx* injected.
    
        Args:
            mcp: The FastMCP instance to register onto.
            ctx: Shared application context (injected via closure).
        """
    
        @mcp.tool()
  • Registration call in app.py: `sync_source.register(mcp, ctx)` which wires the tool into the main MCP server.
    sync_source.register(mcp, ctx)
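The closure-injection pattern used by `register()` can be illustrated with a minimal stand-in for FastMCP; `MiniMCP` and `AppCtx` below are hypothetical, and only the decorator-registration shape matches the real code.

```python
class MiniMCP:
    """Tiny stand-in for FastMCP: tool() registers functions by name."""

    def __init__(self):
        self.tools = {}

    def tool(self):
        def decorator(fn):
            self.tools[fn.__name__] = fn
            return fn
        return decorator


class AppCtx:
    """Stand-in for the shared AppContext."""
    greeting = "hello"


def register(mcp, ctx):
    @mcp.tool()
    def sync_source():
        # ctx is captured from the enclosing scope (closure injection),
        # so the tool signature stays free of framework plumbing.
        return {"status": "ok", "ctx_greeting": ctx.greeting}


mcp = MiniMCP()
register(mcp, AppCtx())
result = mcp.tools["sync_source"]()
```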
  • Imports from `memorymesh.connectors.registry.get_connector_classes` (for resolving connector types) and `memorymesh.server.auth_guard.check_access` (for authorization check on 'index' action).
    from memorymesh.connectors.registry import get_connector_classes
    from memorymesh.server.auth_guard import check_access
    
    if (err := check_access(ctx, "index")) is not None:
        return err
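The walrus-guard pattern above reads cleanly in isolation. This `check_access` stub only mimics the assumed contract of the real `memorymesh.server.auth_guard` helper (return None when authorized, an error dict otherwise); its signature here is an assumption.

```python
def check_access(ctx, action):
    """Stub guard: None when allowed, an MCP-style error dict when denied."""
    if action in ctx.get("allowed_actions", ()):
        return None
    return {"status": "error", "message": f"not authorized for '{action}'"}


def guarded_tool(ctx):
    # Walrus pattern from the handler: bail out early with the error dict.
    if (err := check_access(ctx, "index")) is not None:
        return err
    return {"status": "ok"}


denied = guarded_tool({"allowed_actions": ()})
allowed = guarded_tool({"allowed_actions": ("index",)})
```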
    
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully disclose behavior. It mentions fetching and indexing, and that dry_run skips indexing, but it does not discuss side effects (e.g., whether indexing adds to or replaces existing entries), required permissions, idempotency, or error handling. Minimal transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, front-loaded with the main purpose, and structured with argument descriptions. It avoids unnecessary words, though the docstring format could be slightly more compact. Overall efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description should cover return values, side effects on the search index, error conditions, and concurrency. It does not mention what the tool returns, whether it is destructive, or how it interacts with other tools like index_now. The description is inadequate for a state-modifying sync operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema coverage is 0% (no parameter descriptions), so the description must add meaning. It explains source_type as connector key or null for all, and dry_run as fetch without indexing, which provides value. However, it lacks specifics on valid source_type values or further details, so it only partially compensates for the schema gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Fetch documents from one or all configured external connectors') and the resource (external connectors). It differentiates from siblings like forget_source or list_sources by specifying it fetches and indexes, making the purpose specific and distinct.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains when to use the tool (to run enabled connectors and index documents) and how to specify arguments (source_type and dry_run). However, it does not say when not to use the tool or how it compares with alternatives such as forget_source, leaving that guidance implicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
