
terraform-cloud-mcp

list_runs_in_workspace

Retrieve and filter run history for a Terraform Cloud workspace to audit changes, troubleshoot issues, or monitor deployment activity.

Instructions

List runs in a workspace with filtering and pagination

Retrieves run history for a specific workspace with options to filter by status, operation type, source, and other criteria. Useful for auditing changes, troubleshooting, or monitoring deployment history.

API endpoint: GET /workspaces/{workspace_id}/runs
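
For orientation, the tool is a thin wrapper over that endpoint. A minimal sketch of the equivalent direct HTTP call, using the requests library and assuming a valid API token in a TFC_TOKEN environment variable (parameter names follow HashiCorp's public run-list API; nothing below is taken from this server's source):

    # Hedged sketch: direct call to the Terraform Cloud endpoint this tool wraps.
    # Assumes TFC_TOKEN holds a valid user or team API token.
    import os
    import requests

    workspace_id = "ws-xxxxxxxxxxxxxxxx"  # placeholder workspace ID
    resp = requests.get(
        f"https://app.terraform.io/api/v2/workspaces/{workspace_id}/runs",
        headers={"Authorization": f"Bearer {os.environ['TFC_TOKEN']}"},
        params={
            "page[number]": 1,
            "page[size]": 20,
            "filter[status]": "applied,errored",  # comma-separated statuses
            "search[user]": "alice",              # hypothetical VCS username
        },
    )
    resp.raise_for_status()
    runs = resp.json()["data"]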

Args:
    workspace_id: The workspace ID to list runs for (format: "ws-xxxxxxxx")
    page_number: Page number to fetch (default: 1)
    page_size: Number of results per page (default: 20)
    filter_operation: Filter by operation type
    filter_status: Filter by status
    filter_source: Filter by source
    filter_status_group: Filter by status group
    filter_timeframe: Filter by timeframe
    filter_agent_pool_names: Filter by agent pool names
    search_user: Search by VCS username
    search_commit: Search by commit SHA
    search_basic: Search across run ID, message, commit SHA, and username

Returns: List of runs with metadata, status info, and pagination details
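
The return structure is not spelled out above. As a rough, hedged sketch based on the Terraform Cloud API's JSON:API conventions (field names are illustrative, not taken from this server's output):

    # Illustrative shape only; real responses carry many more attributes.
    example_response = {
        "data": [
            {
                "id": "run-Abc123Xyz456Defg",  # hypothetical run ID
                "type": "runs",
                "attributes": {
                    "status": "applied",
                    "message": "Queued manually",
                    "created-at": "2024-01-01T00:00:00Z",
                },
            }
        ],
        "meta": {
            "pagination": {"current-page": 1, "total-pages": 3, "total-count": 41}
        },
    }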

See: docs/tools/run.md for reference documentation

Input Schema

Name                     Required  Description                                                     Default
workspace_id             Yes       The workspace ID to list runs for
page_number              No        Page number to fetch                                            1
page_size                No        Number of results per page                                      20
filter_operation         No        Filter runs by operation type, comma-separated
filter_status            No        Filter runs by status, comma-separated
filter_source            No        Filter runs by source, comma-separated
filter_status_group      No        Filter runs by status group
filter_timeframe         No        Filter runs by timeframe
filter_agent_pool_names  No        Filter runs by agent pool names, comma-separated
search_user              No        Search for runs by VCS username
search_commit            No        Search for runs by commit SHA
search_basic             No        Basic search across run ID, message, commit SHA, and username

Implementation Reference

  • The handler function that executes the tool logic: it creates a validated request model, builds query parameters, and makes the API request to list runs in the specified workspace. (A hedged sketch of the query-parameter mapping and a sample invocation follow this list.)
    @handle_api_errors
    async def list_runs_in_workspace(
        workspace_id: str,
        page_number: int = 1,
        page_size: int = 20,
        filter_operation: Optional[str] = None,
        filter_status: Optional[str] = None,
        filter_source: Optional[str] = None,
        filter_status_group: Optional[str] = None,
        filter_timeframe: Optional[str] = None,
        filter_agent_pool_names: Optional[str] = None,
        search_user: Optional[str] = None,
        search_commit: Optional[str] = None,
        search_basic: Optional[str] = None,
    ) -> APIResponse:
        """List runs in a workspace with filtering and pagination
    
        Retrieves run history for a specific workspace with options to filter by status,
        operation type, source, and other criteria. Useful for auditing changes, troubleshooting,
        or monitoring deployment history.
    
        API endpoint: GET /workspaces/{workspace_id}/runs
    
        Args:
            workspace_id: The workspace ID to list runs for (format: "ws-xxxxxxxx")
            page_number: Page number to fetch (default: 1)
            page_size: Number of results per page (default: 20)
            filter_operation: Filter by operation type
            filter_status: Filter by status
            filter_source: Filter by source
            filter_status_group: Filter by status group
            filter_timeframe: Filter by timeframe
            filter_agent_pool_names: Filter by agent pool names
            search_user: Search by VCS username
            search_commit: Search by commit SHA
            search_basic: Search across run ID, message, commit SHA, and username
    
        Returns:
            List of runs with metadata, status info, and pagination details
    
        See:
            docs/tools/run.md for reference documentation
        """
        # Create request using Pydantic model for validation
        request = RunListInWorkspaceRequest(
            workspace_id=workspace_id,
            page_number=page_number,
            page_size=page_size,
            filter_operation=filter_operation,
            filter_status=filter_status,
            filter_source=filter_source,
            filter_status_group=filter_status_group,
            filter_timeframe=filter_timeframe,
            filter_agent_pool_names=filter_agent_pool_names,
            search_user=search_user,
            search_commit=search_commit,
            search_basic=search_basic,
        )
    
        # Use the unified query params utility function
        params = query_params(request)
    
        # Make API request
        return await api_request(
            f"workspaces/{workspace_id}/runs", method="GET", params=params
        )
  • Pydantic model defining and validating the input parameters for the list_runs_in_workspace tool, including workspace_id and various filter/search options.
    class RunListInWorkspaceRequest(APIRequest):
        """Request parameters for listing runs in a workspace.
    
        Used with the GET /workspaces/{workspace_id}/runs endpoint to retrieve
        and filter run data for a specific workspace.
    
        Reference: https://developer.hashicorp.com/terraform/cloud-docs/api-docs/run#list-runs-in-a-workspace
    
        See:
            docs/models/run.md for reference
        """
    
        workspace_id: str = Field(
            ...,
            description="The workspace ID to list runs for",
            pattern=r"^ws-[a-zA-Z0-9]{16}$",  # Standardized workspace ID pattern
        )
        page_number: Optional[int] = Field(1, ge=1, description="Page number to fetch")
        page_size: Optional[int] = Field(
            20, ge=1, le=100, description="Number of results per page"
        )
        filter_operation: Optional[str] = Field(
            None,
            description="Filter runs by operation type, comma-separated",
            max_length=100,
        )
        filter_status: Optional[str] = Field(
            None, description="Filter runs by status, comma-separated", max_length=100
        )
        filter_source: Optional[str] = Field(
            None, description="Filter runs by source, comma-separated", max_length=100
        )
        filter_status_group: Optional[str] = Field(
            None, description="Filter runs by status group", max_length=50
        )
        filter_timeframe: Optional[str] = Field(
            None, description="Filter runs by timeframe", max_length=50
        )
        filter_agent_pool_names: Optional[str] = Field(
            None,
            description="Filter runs by agent pool names, comma-separated",
            max_length=100,
        )
        search_user: Optional[str] = Field(
            None, description="Search for runs by VCS username", max_length=100
        )
        search_commit: Optional[str] = Field(
            None, description="Search for runs by commit SHA", max_length=40
        )
        search_basic: Optional[str] = Field(
            None,
            description="Basic search across run ID, message, commit SHA, and username",
            max_length=100,
        )
  • Registers the list_runs_in_workspace function as an MCP tool in the FastMCP server.
    mcp.tool()(runs.list_runs_in_workspace)
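
The query_params utility referenced in the handler is not shown above. A plausible, hedged sketch of the mapping it performs, translating the model's snake_case fields into the bracketed names the Terraform Cloud API expects (the project's real implementation may differ), followed by a hypothetical invocation:

    # Assumption, not the project's source: a query_params-style utility that
    # converts snake_case request fields to bracketed API parameter names,
    # e.g. filter_status -> filter[status], page_number -> page[number].
    from typing import Any, Dict

    def sketch_query_params(request: Any) -> Dict[str, str]:
        params: Dict[str, str] = {}
        for name, value in request.model_dump(exclude_none=True).items():
            if name == "workspace_id":
                continue  # path parameter, not a query parameter
            if name.startswith(("filter_", "search_", "page_")):
                prefix, _, rest = name.partition("_")
                params[f"{prefix}[{rest}]"] = str(value)
        return params

    # Hypothetical use of the tool from async code; a workspace_id that does
    # not match "ws-" plus 16 alphanumerics would raise a pydantic
    # ValidationError before any API call is made.
    # import asyncio
    # runs = asyncio.run(
    #     list_runs_in_workspace("ws-abcdefgh12345678", filter_status="applied")
    # )
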
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions filtering and pagination capabilities, which is helpful. However, it doesn't address important behavioral aspects like rate limits, authentication requirements, error conditions, or whether this is a read-only operation (though implied by 'List'). The API endpoint reference adds some technical context but doesn't fully compensate for missing behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (purpose, usage, API endpoint, args, returns, reference). It's appropriately sized for a tool with 12 parameters. The 'See' reference to external documentation is useful but could be considered extraneous. Most sentences earn their place by adding value beyond the schema.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (12 parameters, no annotations, no output schema), the description does a reasonable job but has gaps. It explains the purpose and parameters well, but lacks details about return structure (only mentions 'List of runs with metadata, status info, and pagination details' without specifics), error handling, or authentication requirements. For a tool with this many parameters and no structured output schema, more detail would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage and 12 parameters, the description provides significant value by explaining the purpose of most parameters in the 'Args' section. It clarifies what each filter does (e.g., 'filter by operation type', 'filter by status', 'search by VCS username') and provides format guidance for workspace_id. However, it doesn't explain parameter formats for filters like 'filter_timeframe' or 'filter_status_group', leaving some ambiguity.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
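
On the ambiguity flagged above: HashiCorp's public run-list API documentation gives example values for both of these filters. A hedged illustration (values drawn from the public API reference, not from this server's code):

    # Values per HashiCorp's run API documentation; verify against the current
    # API reference before relying on them.
    params = {
        "filter[status_group]": "non_final",  # also: "final", "discardable"
        "filter[timeframe]": "2024",          # "year" or a specific four-digit year
    }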

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('List runs in a workspace with filtering and pagination'), identifies the resource ('runs'), and distinguishes it from siblings like 'list_runs_in_organization' by specifying workspace scope. It provides a verb+resource+scope combination that is precise and unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('Useful for auditing changes, troubleshooting, or monitoring deployment history'), providing clear context. However, it doesn't specify when NOT to use it or mention alternatives like 'list_runs_in_organization' for organization-level runs, which would be helpful for sibling differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
