
add_copy_activity_to_pipeline

Add a data copy operation to an existing Microsoft Fabric pipeline to transfer data from source databases to a Lakehouse destination.

Instructions

Add a Copy Activity to an existing Fabric pipeline.

Retrieves an existing pipeline, adds a Copy Activity to it, and updates the pipeline definition. The Copy Activity will be appended to any existing activities in the pipeline.
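For context, this retrieve/append/update round trip matches the public Fabric REST Items API pattern. Below is a minimal sketch, assuming the getDefinition/updateDefinition endpoints and a pipeline-content.json definition part, with long-running-operation handling omitted for brevity; it illustrates the flow, not this server's actual implementation:

```python
import base64
import json
import requests  # assumes a valid Fabric bearer token in `token`

BASE = "https://api.fabric.microsoft.com/v1"

def append_activity(workspace_id, pipeline_id, activity, token):
    headers = {"Authorization": f"Bearer {token}"}

    # 1. Retrieve the current pipeline definition (base64-encoded parts).
    #    getDefinition can return 202 for long-running operations; that
    #    handling is omitted here.
    resp = requests.post(
        f"{BASE}/workspaces/{workspace_id}/items/{pipeline_id}/getDefinition",
        headers=headers,
    )
    resp.raise_for_status()
    parts = resp.json()["definition"]["parts"]
    part = next(p for p in parts if p["path"] == "pipeline-content.json")
    content = json.loads(base64.b64decode(part["payload"]))

    # 2. Append the new Copy Activity after any existing activities.
    content.setdefault("properties", {}).setdefault("activities", []).append(activity)

    # 3. Write the updated definition back.
    part["payload"] = base64.b64encode(json.dumps(content).encode()).decode()
    resp = requests.post(
        f"{BASE}/workspaces/{workspace_id}/items/{pipeline_id}/updateDefinition",
        headers=headers,
        json={"definition": {"parts": parts}},
    )
    resp.raise_for_status()
```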

Use this tool when:

  • You have an existing pipeline and want to add a new Copy Activity

  • You're building complex pipelines with multiple data copy operations

  • You want to incrementally build a pipeline

Parameters:

  • workspace_name: The display name of the workspace containing the pipeline.

  • pipeline_name: Name of the existing pipeline to update.

  • source_type: Type of source (e.g., "AzurePostgreSqlSource", "AzureSqlSource", "SqlServerSource").

  • source_connection_id: Fabric workspace connection ID for the source database.

  • source_table_schema: Schema name of the source table (e.g., "public", "dbo").

  • source_table_name: Name of the source table (e.g., "movie").

  • destination_lakehouse_id: Workspace artifact ID of the destination Lakehouse.

  • destination_connection_id: Fabric workspace connection ID for the destination Lakehouse.

  • destination_table_name: Name for the destination table in the Lakehouse.

  • activity_name: Optional custom name for the activity (default: auto-generated).

  • source_access_mode: Source access mode, "direct" or "sql" (default: "direct").

  • source_sql_query: Optional SQL query for "sql" access mode.

  • table_action_option: Table action, "Append" or "Overwrite" (default: "Append").

  • apply_v_order: Apply V-Order optimization (default: True).

  • timeout: Activity timeout (default: "0.12:00:00").

  • retry: Number of retry attempts (default: 0).

  • retry_interval_seconds: Retry interval in seconds (default: 30).
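The timeout value uses d.hh:mm:ss (TimeSpan) notation, so the default "0.12:00:00" means 0 days and 12 hours. As a rough illustration, here is a hedged sketch of the activity policy block the timeout/retry parameters plausibly map to; the field names follow common pipeline activity JSON and are not confirmed by this tool's docs:

```python
# Hypothetical mapping of the retry/timeout parameters onto a pipeline
# activity policy. Field names are assumptions.
policy = {
    "timeout": "0.12:00:00",       # d.hh:mm:ss -> 0 days, 12 hours
    "retry": 0,                    # no automatic retries by default
    "retryIntervalInSeconds": 30,  # wait between attempts when retry > 0
}
```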

Returns: Dictionary with status, pipeline_id, pipeline_name, activity_name, workspace_name, and message.
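The docs name the keys but not their values, so the shape below is illustrative only; the "success" status string and the IDs shown are assumptions:

```python
# Assumed shape of the return value; keys come from the docs above,
# values shown here are hypothetical.
result = {
    "status": "success",
    "pipeline_id": "11111111-2222-3333-4444-555555555555",
    "pipeline_name": "My_Existing_Pipeline",
    "activity_name": "CopyOrdersData",
    "workspace_name": "Analytics Workspace",
    "message": "Copy Activity 'CopyOrdersData' added to pipeline.",
}
```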

Example:

```python
# First, get the lakehouse and connection IDs
lakehouses = list_items(workspace_name="Analytics", item_type="Lakehouse")
lakehouse_id = lakehouses["items"][0]["id"]
lakehouse_conn_id = "a216973e-47d7-4224-bb56-2c053bac6831"

# Add a Copy Activity to an existing pipeline
result = add_copy_activity_to_pipeline(
    workspace_name="Analytics Workspace",
    pipeline_name="My_Existing_Pipeline",
    source_type="AzurePostgreSqlSource",
    source_connection_id="12345678-1234-1234-1234-123456789abc",
    source_table_schema="public",
    source_table_name="orders",
    destination_lakehouse_id=lakehouse_id,
    destination_connection_id=lakehouse_conn_id,
    destination_table_name="orders",
    activity_name="CopyOrdersData",
    table_action_option="Overwrite",
)

# Add another Copy Activity to the same pipeline
result = add_copy_activity_to_pipeline(
    workspace_name="Analytics Workspace",
    pipeline_name="My_Existing_Pipeline",
    source_type="AzurePostgreSqlSource",
    source_connection_id="12345678-1234-1234-1234-123456789abc",
    source_table_schema="public",
    source_table_name="customers",
    destination_lakehouse_id=lakehouse_id,
    destination_connection_id=lakehouse_conn_id,
    destination_table_name="customers",
    activity_name="CopyCustomersData",
)

# SQL fallback mode (use when direct Lakehouse copy fails with
# "datasource type Lakehouse is invalid" error):
result = add_copy_activity_to_pipeline(
    workspace_name="Analytics Workspace",
    pipeline_name="My_Existing_Pipeline",
    source_type="LakehouseTableSource",
    source_connection_id=sql_endpoint_conn_id,  # SQL analytics endpoint connection
    source_table_schema="dbo",
    source_table_name="fact_sale",
    destination_lakehouse_id=lakehouse_id,
    destination_connection_id=lakehouse_conn_id,
    destination_table_name="fact_sale_copy",
    source_access_mode="sql",
    source_sql_query="SELECT * FROM dbo.fact_sale",  # optional
)
```
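If you expect the "datasource type Lakehouse is invalid" error, a small wrapper can automate the fallback. This is a hypothetical convenience function built only on the tool above; it assumes the result dict reports "success" as its happy-path status, which the docs do not confirm:

```python
# Hypothetical wrapper: try the direct Lakehouse copy first, then retry
# through the SQL analytics endpoint if the service rejects the Lakehouse
# datasource type. The "success" status value is an assumption.
def add_copy_with_sql_fallback(sql_endpoint_conn_id, **kwargs):
    result = add_copy_activity_to_pipeline(source_access_mode="direct", **kwargs)
    if (result["status"] != "success"
            and "datasource type Lakehouse is invalid" in result.get("message", "")):
        kwargs["source_connection_id"] = sql_endpoint_conn_id  # SQL analytics endpoint
        result = add_copy_activity_to_pipeline(source_access_mode="sql", **kwargs)
    return result
```

Passing the SQL analytics endpoint connection separately keeps the direct-mode connection ID untouched for the first attempt.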

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| workspace_name | Yes | | |
| pipeline_name | Yes | | |
| source_type | Yes | | |
| source_connection_id | Yes | | |
| source_table_schema | Yes | | |
| source_table_name | Yes | | |
| destination_lakehouse_id | Yes | | |
| destination_connection_id | Yes | | |
| destination_table_name | Yes | | |
| activity_name | No | | |
| source_access_mode | No | | direct |
| source_sql_query | No | | |
| table_action_option | No | | Append |
| apply_v_order | No | | |
| timeout | No | | 0.12:00:00 |
| retry | No | | |
| retry_interval_seconds | No | | |
