Unstructured-IO

Unstructured API MCP Server

Official

create_destination_connector

Configure data export to destinations like databases, cloud storage, or vector stores by specifying connector type and required parameters.

Instructions

Create a destination connector based on type.

Args:
    ctx: Context object with the request and lifespan context
    name: A unique name for this connector
    destination_type: The type of destination being created

    type_specific_config: Configuration options specific to the chosen destination_type:
        astradb:
            collection_name: The AstraDB collection name
            keyspace: The AstraDB keyspace
            batch_size: (Optional[int]) The batch size for inserting documents
        databricks_delta_table:
            catalog: Name of the catalog in Databricks Unity Catalog
            database: The database in Unity Catalog
            http_path: The cluster’s or SQL warehouse’s HTTP Path value
            server_hostname: The Databricks cluster’s or SQL warehouse’s Server Hostname value
            table_name: The name of the table in the schema
            volume: Name of the volume associated with the schema.
            schema: (Optional[str]) Name of the schema associated with the volume
            volume_path: (Optional[str]) Any target folder path within the volume, starting
                        from the root of the volume.
        databricks_volumes:
            catalog: Name of the catalog in Databricks
            host: The Databricks host URL
            volume: Name of the volume associated with the schema
            schema: (Optional[str]) Name of the schema associated with the volume. The default
                     value is "default".
            volume_path: (Optional[str]) Any target folder path within the volume,
                        starting from the root of the volume.
        mongodb:
            database: The name of the MongoDB database
            collection: The name of the MongoDB collection
        neo4j:
            database: The Neo4j database, e.g. "neo4j"
            uri: The Neo4j URI, e.g. neo4j+s://<neo4j_instance_id>.databases.neo4j.io
            batch_size: (Optional[int]) The batch size for the connector
        pinecone:
            index_name: The Pinecone index name
            namespace: (Optional[str]) The Pinecone namespace, a folder inside the
                       Pinecone index
            batch_size: (Optional[int]) The batch size
        s3:
            remote_url: The S3 URI to the bucket or folder
        weaviate:
            cluster_url: URL of the Weaviate cluster
            collection: Name of the collection in the Weaviate cluster

            Note: Minimal schema is required for the collection, e.g. record_id: Text

Returns:
    String containing the created destination connector information
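For concreteness, the sketch below builds a few hypothetical type_specific_config payloads following the parameter names documented above; every bucket, database, and index name is illustrative, not taken from a real deployment:

```python
# Hypothetical payloads for type_specific_config; all values below are
# illustrative placeholders -- substitute your own resources.
s3_config = {
    "remote_url": "s3://example-bucket/processed-output/",  # S3 URI to the bucket or folder
}

mongodb_config = {
    "database": "documents",   # MongoDB database name
    "collection": "chunks",    # MongoDB collection name
}

pinecone_config = {
    "index_name": "doc-embeddings",  # Pinecone index name
    "namespace": "prod",             # optional: a folder inside the index
    "batch_size": 100,               # optional batch size
}

# A full tool invocation pairs one payload with a connector name and type:
request = {
    "name": "my-s3-destination",
    "destination_type": "s3",
    "type_specific_config": s3_config,
}
```

Only the keys listed for the chosen destination_type should appear in the payload; extra keys would be rejected when the server forwards them to the type-specific creator.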

Input Schema

Name                    Required
name                    Yes
destination_type        Yes
type_specific_config    Yes

Output Schema

Name      Required
result    Yes

Implementation Reference

  • The primary handler function implementing the 'create_destination_connector' MCP tool. It dispatches creation logic to type-specific helper functions based on the provided destination_type.
    async def create_destination_connector(
        ctx: Context,
        name: str,
        destination_type: Literal[
            "astradb",
            "databricks_delta_table",
            "databricks_volumes",
            "mongodb",
            "neo4j",
            "pinecone",
            "s3",
            "weaviate",
        ],
        type_specific_config: dict[str, Any],
    ) -> str:
        """Create a destination connector based on type.
    
        Args:
            ctx: Context object with the request and lifespan context
            name: A unique name for this connector
            destination_type: The type of destination being created
    
            type_specific_config: Configuration options specific to the chosen destination_type:
                astradb:
                    collection_name: The AstraDB collection name
                    keyspace: The AstraDB keyspace
                    batch_size: (Optional[int]) The batch size for inserting documents
                databricks_delta_table:
                    catalog: Name of the catalog in Databricks Unity Catalog
                    database: The database in Unity Catalog
                    http_path: The cluster’s or SQL warehouse’s HTTP Path value
                    server_hostname: The Databricks cluster’s or SQL warehouse’s Server Hostname value
                    table_name: The name of the table in the schema
                    volume: Name of the volume associated with the schema.
                    schema: (Optional[str]) Name of the schema associated with the volume
                    volume_path: (Optional[str]) Any target folder path within the volume, starting
                                from the root of the volume.
                databricks_volumes:
                    catalog: Name of the catalog in Databricks
                    host: The Databricks host URL
                    volume: Name of the volume associated with the schema
                    schema: (Optional[str]) Name of the schema associated with the volume. The default
                             value is "default".
                    volume_path: (Optional[str]) Any target folder path within the volume,
                                starting from the root of the volume.
                mongodb:
                    database: The name of the MongoDB database
                    collection: The name of the MongoDB collection
                neo4j:
                    database: The Neo4j database, e.g. "neo4j"
                    uri: The Neo4j URI, e.g. neo4j+s://<neo4j_instance_id>.databases.neo4j.io
                    batch_size: (Optional[int]) The batch size for the connector
                pinecone:
                    index_name: The Pinecone index name
                    namespace: (Optional[str]) The Pinecone namespace, a folder inside the
                               Pinecone index
                    batch_size: (Optional[int]) The batch size
                s3:
                    remote_url: The S3 URI to the bucket or folder
                weaviate:
                    cluster_url: URL of the Weaviate cluster
                    collection: Name of the collection in the Weaviate cluster
    
                    Note: Minimal schema is required for the collection, e.g. record_id: Text
    
        Returns:
            String containing the created destination connector information
        """
        destination_functions = {
            "astradb": create_astradb_destination,
            "databricks_delta_table": create_databricks_delta_table_destination,
            "databricks_volumes": create_databricks_volumes_destination,
            "mongodb": create_mongodb_destination,
            "neo4j": create_neo4j_destination,
            "pinecone": create_pinecone_destination,
            "s3": create_s3_destination,
            "weaviate": create_weaviate_destination,
        }
    
        if destination_type in destination_functions:
            destination_function = destination_functions[destination_type]
            return await destination_function(ctx=ctx, name=name, **type_specific_config)
    
        return (
            f"Unsupported destination type: {destination_type}. "
            f"Please use a supported destination type {list(destination_functions.keys())}."
        )
  • Direct registration of the 'create_destination_connector' tool using mcp.tool() decorator within the destination connectors module.
    def register_destination_connectors(mcp: FastMCP):
        """Register all destination connector tools with the MCP server."""
        mcp.tool()(create_destination_connector)
        mcp.tool()(update_destination_connector)
        mcp.tool()(delete_destination_connector)
  • Top-level call to register destination connectors (including 'create_destination_connector') as part of all connectors registration.
    register_destination_connectors(mcp)
  • Type annotations defining the input schema for the tool, including supported destination types via Literal.
    async def create_destination_connector(
        ctx: Context,
        name: str,
        destination_type: Literal[
            "astradb",
            "databricks_delta_table",
            "databricks_volumes",
            "mongodb",
            "neo4j",
            "pinecone",
            "s3",
            "weaviate",
        ],
        type_specific_config: dict[str, Any],
    ) -> str:
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. While it mentions the tool creates something (implying mutation), it doesn't disclose permission requirements, whether the operation is idempotent, error conditions, or what happens if a connector with the same name exists. The description provides some context about configuration options but lacks critical behavioral information for a creation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately structured with clear sections (Args, Returns) but is quite lengthy due to the detailed parameter documentation. While this length is justified given the complexity, it could benefit from a brief introductory sentence explaining what a destination connector is before diving into parameters.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (3 parameters with nested objects, 8 destination types) and no annotations, the description does a good job covering parameter semantics. The existence of an output schema means the description doesn't need to explain return values. However, it lacks context about the broader system and how this tool fits into workflows with other tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description provides extensive parameter documentation that fully compensates. It explains all three parameters (name, destination_type, type_specific_config) and provides detailed configuration options for each destination type, including optional parameters and default values. This adds significant value beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool creates a destination connector based on type, which is a specific verb+resource combination. It distinguishes from siblings like 'update_destination_connector' and 'delete_destination_connector' by focusing on creation, but doesn't explicitly differentiate from 'create_source_connector' or explain what a destination connector is in this context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided about when to use this tool versus alternatives. It doesn't mention prerequisites, when creation is appropriate versus updating existing connectors, or how this relates to sibling tools like 'create_source_connector' or 'create_workflow' in the broader system context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
